id (int64) | labels_url (string) | body (string) | updated_at (string) | number (int64) | milestone (dict) | repository_url (string) | draft (bool) | labels (list) | created_at (string) | comments_url (string) | assignee (dict) | timeline_url (string) | title (string) | events_url (string) | active_lock_reason (null) | user (dict) | assignees (list) | performed_via_github_app (null) | state_reason (string) | author_association (string) | closed_at (string) | pull_request (dict) | node_id (string) | comments (sequence) | reactions (dict) | state (string) | locked (bool) | url (string) | html_url (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,193,172,074 | https://api.github.com/repos/huggingface/datasets/issues/6740/labels{/name} | ### Feature request
Request for adding rasterio support to load geotiff as a part of ImageFolder, instead of using PIL
### Motivation
As of now, there are many datasets on the Hugging Face Hub that are predominantly focused on remote sensing or come from remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not really optimal, because these datasets mostly have images with many channels and additional metadata. Using PIL makes one lose them unless we provide a custom script. Hence, maybe an API could be added to handle this in a common way?
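(Editor's illustrative aside, not part of the original request: a minimal sketch of what rasterio exposes that PIL-based loading drops, assuming `rasterio` is installed and `example.tif` is a multi-band GeoTIFF.)
```python
import rasterio

# Read every band plus the geospatial metadata that a PIL-based loader would drop
with rasterio.open("example.tif") as src:
    bands = src.read()    # numpy array of shape (bands, height, width)
    metadata = src.meta   # dtype, band count, CRS, affine transform, ...
```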
### Your contribution
If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised. | 2024-03-27T18:19:48Z | 6,740 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-18T20:00:39Z | https://api.github.com/repos/huggingface/datasets/issues/6740/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6740/timeline | Support for loading geotiff files as a part of the ImageFolder | https://api.github.com/repos/huggingface/datasets/issues/6740/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31362090?v=4",
"events_url": "https://api.github.com/users/sunny1401/events{/privacy}",
"followers_url": "https://api.github.com/users/sunny1401/followers",
"following_url": "https://api.github.com/users/sunny1401/following{/other_user}",
"gists_url": "https://api.github.com/users/sunny1401/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sunny1401",
"id": 31362090,
"login": "sunny1401",
"node_id": "MDQ6VXNlcjMxMzYyMDkw",
"organizations_url": "https://api.github.com/users/sunny1401/orgs",
"received_events_url": "https://api.github.com/users/sunny1401/received_events",
"repos_url": "https://api.github.com/users/sunny1401/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sunny1401/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunny1401/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sunny1401"
} | [] | null | not_planned | NONE | 2024-03-27T18:19:20Z | null | I_kwDODunzps6CuSZq | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6740/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6740 | https://github.com/huggingface/datasets/issues/6740 | false |
2,192,730,134 | https://api.github.com/repos/huggingface/datasets/issues/6739/labels{/name} | Closes https://github.com/huggingface/datasets/issues/6252 | 2024-03-19T15:35:57Z | 6,739 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-18T16:43:06Z | https://api.github.com/repos/huggingface/datasets/issues/6739/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6739/timeline | Transpose images with EXIF Orientation tag | https://api.github.com/repos/huggingface/datasets/issues/6739/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-19T15:29:42Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6739",
"merged_at": "2024-03-19T15:29:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6739"
} | PR_kwDODunzps5p-Bwe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6739). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005295 / 0.011353 (-0.006058) | 0.003402 / 0.011008 (-0.007606) | 0.062860 / 0.038508 (0.024352) | 0.029627 / 0.023109 (0.006518) | 0.238359 / 0.275898 (-0.037539) | 0.262940 / 0.323480 (-0.060540) | 0.003077 / 0.007986 (-0.004909) | 0.002676 / 0.004328 (-0.001652) | 0.048731 / 0.004250 (0.044480) | 0.043989 / 0.037052 (0.006936) | 0.255702 / 0.258489 (-0.002787) | 0.282667 / 0.293841 (-0.011174) | 0.028019 / 0.128546 (-0.100527) | 0.010195 / 0.075646 (-0.065451) | 0.205472 / 0.419271 (-0.213800) | 0.036551 / 0.043533 (-0.006982) | 0.243282 / 0.255139 (-0.011857) | 0.261925 / 0.283200 (-0.021274) | 0.020506 / 0.141683 (-0.121177) | 1.137228 / 1.452155 (-0.314927) | 1.183935 / 1.492716 (-0.308782) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100290 / 0.018006 (0.082284) | 0.316279 / 0.000490 (0.315790) | 0.000239 / 0.000200 (0.000039) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017979 / 0.037411 (-0.019432) | 0.061616 / 0.014526 (0.047090) | 0.072989 / 0.176557 (-0.103568) | 0.118667 / 0.737135 (-0.618468) | 0.074266 / 0.296338 (-0.222072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287971 / 0.215209 (0.072762) | 2.845235 / 2.077655 (0.767581) | 1.501983 / 1.504120 (-0.002137) | 1.389824 / 1.541195 (-0.151370) | 1.415616 / 
1.468490 (-0.052874) | 0.568727 / 4.584777 (-4.016050) | 2.368330 / 3.745712 (-1.377382) | 2.844329 / 5.269862 (-2.425532) | 1.809038 / 4.565676 (-2.756639) | 0.063699 / 0.424275 (-0.360576) | 0.004972 / 0.007607 (-0.002635) | 0.340092 / 0.226044 (0.114048) | 3.369146 / 2.268929 (1.100217) | 1.863423 / 55.444624 (-53.581201) | 1.608334 / 6.876477 (-5.268142) | 1.624479 / 2.142072 (-0.517594) | 0.632439 / 4.805227 (-4.172788) | 0.116862 / 6.500664 (-6.383802) | 0.042558 / 0.075469 (-0.032911) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967922 / 1.841788 (-0.873866) | 11.730612 / 8.074308 (3.656304) | 9.321333 / 10.191392 (-0.870059) | 0.142604 / 0.680424 (-0.537819) | 0.013934 / 0.534201 (-0.520267) | 0.285992 / 0.579283 (-0.293292) | 0.267639 / 0.434364 (-0.166724) | 0.324972 / 0.540337 (-0.215365) | 0.427077 / 1.386936 (-0.959859) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005806 / 0.011353 (-0.005547) | 0.003771 / 0.011008 (-0.007237) | 0.049542 / 0.038508 (0.011034) | 0.030182 / 0.023109 (0.007073) | 0.303923 / 0.275898 (0.028025) | 0.325623 / 0.323480 (0.002143) | 0.004327 / 0.007986 (-0.003659) | 0.002818 / 0.004328 (-0.001510) | 0.048237 / 0.004250 (0.043987) | 0.047490 / 0.037052 (0.010437) | 0.316556 / 0.258489 (0.058067) | 0.348352 / 0.293841 (0.054512) | 0.029444 / 0.128546 (-0.099102) | 0.010544 / 0.075646 (-0.065102) | 0.057382 / 0.419271 (-0.361890) | 0.056210 / 0.043533 (0.012677) | 0.305495 / 0.255139 (0.050356) | 0.321570 / 0.283200 (0.038370) | 0.019546 / 0.141683 (-0.122137) | 1.141732 / 1.452155 (-0.310423) | 1.223626 / 1.492716 (-0.269091) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093864 / 0.018006 (0.075858) | 0.309715 / 0.000490 (0.309226) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022047 / 0.037411 (-0.015364) | 0.074885 / 0.014526 (0.060359) | 0.088440 / 0.176557 (-0.088117) | 0.127033 / 0.737135 (-0.610103) | 0.089048 / 0.296338 (-0.207290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292624 / 0.215209 (0.077415) | 2.877592 / 2.077655 (0.799937) | 1.607036 / 1.504120 (0.102916) | 1.487819 / 1.541195 (-0.053376) | 1.517318 / 1.468490 (0.048828) | 0.553321 / 4.584777 (-4.031456) | 2.415577 / 3.745712 (-1.330135) | 2.691411 / 5.269862 (-2.578450) | 1.743395 / 4.565676 (-2.822282) | 0.062187 / 0.424275 (-0.362088) | 0.005073 / 0.007607 (-0.002534) | 0.342907 / 0.226044 (0.116863) | 3.402054 / 2.268929 (1.133126) | 1.979481 / 55.444624 (-53.465143) | 1.702885 / 6.876477 (-5.173592) | 1.868279 / 2.142072 (-0.273794) | 0.640095 / 4.805227 (-4.165132) | 0.117138 / 6.500664 (-6.383526) | 0.042197 / 0.075469 (-0.033272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007495 / 1.841788 (-0.834292) | 12.037309 / 8.074308 (3.963001) | 10.227670 / 10.191392 (0.036278) | 0.149533 / 0.680424 (-0.530891) | 0.015282 / 0.534201 (-0.518919) | 0.287357 / 0.579283 (-0.291926) | 0.285109 / 0.434364 (-0.149255) | 0.324027 / 0.540337 (-0.216311) | 0.442482 / 1.386936 (-0.944454) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#19b40860acf3b3ba8db727fcf3b1b99ebb8d7e33 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6739/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6739 | https://github.com/huggingface/datasets/pull/6739 | true |
2,192,386,536 | https://api.github.com/repos/huggingface/datasets/issues/6738/labels{/name} | When I try to create a `Dataset` object with None values inside a dict column, like this:
```python
from datasets import Dataset, Features, Value
Dataset.from_dict(
{
"dict": [{"a": 0, "b": 0}, None],
}, features=Features(
{"dict": {"a": Value("int16"), "b": Value("int16")}}
)
)
```
I get `ValueError: Got None but expected a dictionary instead`.
At the same time, having None in a _nested_ dict feature works; for example, this doesn't throw any errors:
```python
from datasets import Dataset, Features, Value, Sequence
dataset = Dataset.from_dict(
{
"list_dict": [[{"a": 0, "b": 0}], None],
"sequence_dict": [[{"a": 0, "b": 0}], None],
}, features=Features({
"list_dict": [{"a": Value("int16"), "b": Value("int16")}],
"sequence_dict": Sequence({"a": Value("int16"), "b": Value("int16")}),
})
)
```
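(Editor's illustrative aside, not part of the original report: at the Arrow level, a struct column built from a list containing `None` is accepted, which suggests the restriction lies in `datasets`' own encoding step. A minimal sketch, assuming `pyarrow` is installed:)
```python
import pyarrow as pa

# Arrow infers struct<a: int64, b: int64> and stores the second entry as null
column = pa.array([{"a": 0, "b": 0}, None])
print(column.null_count)  # 1
```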
Other types of features also seem to be nullable (but I haven't checked all of them).
Version of `datasets` is the latest atm (2.18.0)
Is this an expected behavior or a bug? | 2024-03-20T10:24:15Z | 6,738 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2024-03-18T14:31:47Z | https://api.github.com/repos/huggingface/datasets/issues/6738/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6738/timeline | Dict feature is non-nullable while nested dict feature is | https://api.github.com/repos/huggingface/datasets/issues/6738/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | null | completed | CONTRIBUTOR | 2024-03-19T20:05:20Z | null | I_kwDODunzps6CrSno | [
"It looks like a bug, by default every feature should be nullable.",
"I've linked a PR with a fix :)",
"@mariosasko awesome thank you!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6738/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6738 | https://github.com/huggingface/datasets/issues/6738 | false |
2,190,198,425 | https://api.github.com/repos/huggingface/datasets/issues/6737/labels{/name} | ### Describe the bug
`ValueError: Invalid pattern: '**' can only be an entire path component` is raised when loading any dataset.
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style")
### Expected behavior
loading the dataset successfully
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | 2024-04-01T12:16:59Z | 6,737 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-16T19:28:46Z | https://api.github.com/repos/huggingface/datasets/issues/6737/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6737/timeline | Invalid pattern: '**' can only be an entire path component | https://api.github.com/repos/huggingface/datasets/issues/6737/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/28976175?v=4",
"events_url": "https://api.github.com/users/JPonsa/events{/privacy}",
"followers_url": "https://api.github.com/users/JPonsa/followers",
"following_url": "https://api.github.com/users/JPonsa/following{/other_user}",
"gists_url": "https://api.github.com/users/JPonsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JPonsa",
"id": 28976175,
"login": "JPonsa",
"node_id": "MDQ6VXNlcjI4OTc2MTc1",
"organizations_url": "https://api.github.com/users/JPonsa/orgs",
"received_events_url": "https://api.github.com/users/JPonsa/received_events",
"repos_url": "https://api.github.com/users/JPonsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JPonsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JPonsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JPonsa"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ci8aZ | [
"I couldn't reproduce the issue on my side on MacOS, I guess the issue comes from the recent `fsspec` on Windows.\r\n\r\nCan you try downgrading to `fsspec==2023.9.2` for now ? It would also be great to investigate this and see if we need a fix in `datasets` or `fsspec`",
"I had the same issue! \r\nDowngrading to fsspec from 2023.10.0 to 2023.9.2 solved it for me.\r\n\r\n(env: python 3.11.7, datasets version: 2.15.0, Windows 10 22H2, Build 19045.4170)\r\n\r\nThanks a lot!"
] | {
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6737/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6737 | https://github.com/huggingface/datasets/issues/6737 | false |
2,190,181,422 | https://api.github.com/repos/huggingface/datasets/issues/6736/labels{/name} | ### Feature request
I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically their [MDS Format](https://docs.mosaicml.com/projects/streaming/en/stable/fundamentals/dataset_format.html#mds).
Because the shard files have similar semantics to WebDataset, I'm hoping that adding such support won't be too much trouble?
### Motivation
One of the downsides with WebDataset is a lack of out-of-the-box determinism (especially for large-scale training and reproducibility), easy job resumption, and the ability to quickly debug / visualize individual examples.
Mosaic Streaming provides a [great interface for this out of the box](https://docs.mosaicml.com/projects/streaming/en/stable/#key-features), so I'd love to see it supported in HF Datasets.
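(Editor's illustrative sketch, not from the original issue: MDS shards are typically written and read back with the `streaming` library roughly as below; the column encodings, sample fields, and `mds-data` path are assumptions chosen for illustration.)
```python
from streaming import MDSWriter, StreamingDataset

# Write a handful of samples as MDS shards, then stream them back from local disk
columns = {"text": "str", "label": "int"}
with MDSWriter(out="mds-data", columns=columns) as writer:
    for i in range(10):
        writer.write({"text": f"sample {i}", "label": i})

dataset = StreamingDataset(local="mds-data", shuffle=False)
print(dataset[0])
```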
### Your contribution
Happy to help test things / provide example data. Can potentially submit a PR if maintainers could point me to the necessary WebDataset logic / steps for adding a new streaming format! | 2024-03-18T15:13:34Z | 6,736 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-16T18:42:04Z | https://api.github.com/repos/huggingface/datasets/issues/6736/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6736/timeline | Mosaic Streaming (MDS) Support | https://api.github.com/repos/huggingface/datasets/issues/6736/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/siddk",
"id": 2498509,
"login": "siddk",
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"repos_url": "https://api.github.com/users/siddk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/siddk"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ci4Qu | [
"Hi ! that would be great :) Though note that `datasets` doesn't implement format-specific resuming when streaming, so in general I think it's better if users can use the mosaic-streaming library to read their MDS datasets. I wonder if they support `hf://` paths though...\r\n\r\nAnyway for those interested, the code for WebDataset is a single file here: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py.\r\n\r\nIt implements `_split_generators` that downloads files and returns the lists of splits (train/validation/test) and `_split_generators` to generate examples (dicts) from the downloaded files. Streaming is automatically supported by making download steps lazy and by extending `open()` to work with remote URLs."
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6736/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6736 | https://github.com/huggingface/datasets/issues/6736 | false |
2,189,132,932 | https://api.github.com/repos/huggingface/datasets/issues/6735/labels{/name} | Fix https://github.com/huggingface/datasets/issues/6675 | 2024-03-18T15:47:48Z | 6,735 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-15T17:21:12Z | https://api.github.com/repos/huggingface/datasets/issues/6735/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6735/timeline | Add `mode` parameter to `Image` feature | https://api.github.com/repos/huggingface/datasets/issues/6735/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-18T15:41:33Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6735",
"merged_at": "2024-03-18T15:41:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6735"
} | PR_kwDODunzps5px84g | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005009 / 0.011353 (-0.006344) | 0.003547 / 0.011008 (-0.007461) | 0.063014 / 0.038508 (0.024506) | 0.027699 / 0.023109 (0.004589) | 0.247140 / 0.275898 (-0.028758) | 0.273610 / 0.323480 (-0.049870) | 0.003115 / 0.007986 (-0.004871) | 0.002712 / 0.004328 (-0.001616) | 0.049134 / 0.004250 (0.044883) | 0.041582 / 0.037052 (0.004530) | 0.269992 / 0.258489 (0.011503) | 0.294516 / 0.293841 (0.000675) | 0.027818 / 0.128546 (-0.100728) | 0.010568 / 0.075646 (-0.065078) | 0.207710 / 0.419271 (-0.211561) | 0.035767 / 0.043533 (-0.007766) | 0.260058 / 0.255139 (0.004919) | 0.277615 / 0.283200 (-0.005585) | 0.020192 / 0.141683 (-0.121491) | 1.116863 / 1.452155 (-0.335292) | 1.156868 / 1.492716 (-0.335848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095087 / 0.018006 (0.077081) | 0.303249 / 0.000490 (0.302759) | 0.000215 / 0.000200 (0.000015) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018866 / 0.037411 (-0.018545) | 0.063853 / 0.014526 (0.049328) | 0.073863 / 0.176557 (-0.102693) | 0.121399 / 0.737135 (-0.615737) | 0.076014 / 0.296338 (-0.220325) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289843 / 0.215209 (0.074634) | 2.844085 / 2.077655 (0.766431) | 1.528022 / 1.504120 (0.023902) | 1.397352 / 1.541195 (-0.143843) | 1.394676 / 
1.468490 (-0.073814) | 0.555899 / 4.584777 (-4.028878) | 2.354010 / 3.745712 (-1.391702) | 2.737715 / 5.269862 (-2.532146) | 1.731260 / 4.565676 (-2.834416) | 0.062315 / 0.424275 (-0.361960) | 0.004920 / 0.007607 (-0.002687) | 0.342921 / 0.226044 (0.116877) | 3.416529 / 2.268929 (1.147600) | 1.862941 / 55.444624 (-53.581684) | 1.599661 / 6.876477 (-5.276816) | 1.617200 / 2.142072 (-0.524873) | 0.635129 / 4.805227 (-4.170099) | 0.121651 / 6.500664 (-6.379013) | 0.041867 / 0.075469 (-0.033602) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990825 / 1.841788 (-0.850962) | 11.435576 / 8.074308 (3.361268) | 9.490194 / 10.191392 (-0.701198) | 0.133295 / 0.680424 (-0.547129) | 0.014061 / 0.534201 (-0.520140) | 0.288648 / 0.579283 (-0.290635) | 0.268874 / 0.434364 (-0.165490) | 0.323288 / 0.540337 (-0.217049) | 0.426090 / 1.386936 (-0.960846) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006712 / 0.011353 (-0.004641) | 0.003723 / 0.011008 (-0.007285) | 0.049814 / 0.038508 (0.011306) | 0.039323 / 0.023109 (0.016213) | 0.279244 / 0.275898 (0.003346) | 0.297139 / 0.323480 (-0.026341) | 0.004197 / 0.007986 (-0.003788) | 0.002753 / 0.004328 (-0.001576) | 0.048820 / 0.004250 (0.044569) | 0.049593 / 0.037052 (0.012541) | 0.287247 / 0.258489 (0.028758) | 0.338078 / 0.293841 (0.044237) | 0.029303 / 0.128546 (-0.099243) | 0.010292 / 0.075646 (-0.065354) | 0.057852 / 0.419271 (-0.361419) | 0.053390 / 0.043533 (0.009857) | 0.275155 / 0.255139 (0.020016) | 0.292891 / 0.283200 (0.009692) | 0.020007 / 0.141683 (-0.121676) | 1.161731 / 1.452155 (-0.290424) | 1.232162 / 1.492716 (-0.260555) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092848 / 0.018006 (0.074842) | 0.301180 / 0.000490 (0.300690) | 0.000236 / 0.000200 (0.000036) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022477 / 0.037411 (-0.014934) | 0.077012 / 0.014526 (0.062486) | 0.087335 / 0.176557 (-0.089222) | 0.126761 / 0.737135 (-0.610374) | 0.089249 / 0.296338 (-0.207090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290722 / 0.215209 (0.075513) | 2.884485 / 2.077655 (0.806830) | 1.565775 / 1.504120 (0.061656) | 1.442369 / 1.541195 (-0.098825) | 1.453995 / 1.468490 (-0.014495) | 0.563193 / 4.584777 (-4.021584) | 2.413610 / 3.745712 (-1.332102) | 2.684567 / 5.269862 (-2.585295) | 1.753322 / 4.565676 (-2.812354) | 0.061879 / 0.424275 (-0.362396) | 0.005080 / 0.007607 (-0.002527) | 0.347274 / 0.226044 (0.121229) | 3.435836 / 2.268929 (1.166907) | 1.937893 / 55.444624 (-53.506731) | 1.657824 / 6.876477 (-5.218653) | 1.777767 / 2.142072 (-0.364305) | 0.656757 / 4.805227 (-4.148471) | 0.117144 / 6.500664 (-6.383520) | 0.040691 / 0.075469 (-0.034778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012435 / 1.841788 (-0.829353) | 12.038001 / 8.074308 (3.963693) | 10.363947 / 10.191392 (0.172555) | 0.140711 / 0.680424 (-0.539713) | 0.014937 / 0.534201 (-0.519264) | 0.291070 / 0.579283 (-0.288213) | 0.277180 / 0.434364 (-0.157184) | 0.327433 / 0.540337 (-0.212904) | 0.439767 / 1.386936 (-0.947169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b55ec53e980855d71ae22f8b3d12b2a0d476a51 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6735/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6735 | https://github.com/huggingface/datasets/pull/6735 | true |
2,187,646,694 | https://api.github.com/repos/huggingface/datasets/issues/6734/labels{/name} | ### Describe the bug
Mapped tokenization slows down substantially towards the end of the dataset.
The train set started off very slow, caught up around 20k examples, and then tapered off toward the end.
What's particularly strange is that the tokenization had crashed a few times before, due to errors with invalid tokens somewhere or corrupted downloads, and the speed-ups and slow-downs consistently happened at the same points.
```bash
Running tokenizer on dataset (num_proc=48): 0%| | 847000/881416735 [12:18<252:45:45, 967.72 examples/s]
Running tokenizer on dataset (num_proc=48): 0%| | 848000/881416735 [12:19<224:16:10, 1090.66 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84964000/881416735 [3:48:00<11:21:34, 19476.01 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84967000/881416735 [3:48:00<12:04:01, 18333.79 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538631977/881416735 [13:46:40<27:50:04, 3420.84 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538632977/881416735 [13:46:40<23:48:20, 3999.77 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881365886/881416735 [38:30:19<04:34, 185.10 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881366886/881416735 [38:30:25<04:36, 180.57 examples/s]
```
and validation set as well
```bash
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41544000/46390354 [28:44<02:37, 30798.76 examples/s]
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41550000/46390354 [28:44<02:08, 37698.08 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:15:48<12:22:44, 36.87 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:16:00<12:22:44, 36.87 examples/s]
```
### Steps to reproduce the bug
using the following kwargs
```python
with accelerator.main_process_first():
    lm_datasets = tokenized_datasets.map(
        group_texts,
        batched=True,
        num_proc=48,
        load_from_cache_file=True,
        desc=f"Grouping texts in chunks of {block_size}",
    )
```
running through slurm script
```bash
#SBATCH --partition=gpu-nvidia-a100
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=8
#SBATCH --cpus-per-task=96
```
using this dataset https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
### Expected behavior
Constant speed throughout
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.10
- Python version: 3.8.18
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | 2024-03-15T15:27:59Z | 6,734 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-15T03:27:36Z | https://api.github.com/repos/huggingface/datasets/issues/6734/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6734/timeline | Tokenization slows towards end of dataset | https://api.github.com/repos/huggingface/datasets/issues/6734/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}",
"followers_url": "https://api.github.com/users/ethansmith2000/followers",
"following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ethansmith2000",
"id": 98723285,
"login": "ethansmith2000",
"node_id": "U_kgDOBeJl1Q",
"organizations_url": "https://api.github.com/users/ethansmith2000/orgs",
"received_events_url": "https://api.github.com/users/ethansmith2000/received_events",
"repos_url": "https://api.github.com/users/ethansmith2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ethansmith2000"
} | [] | null | null | NONE | null | null | I_kwDODunzps6CZNbm | [
"Hi ! First note that if the dataset is not heterogeneous / shuffled, there might be places in the data with shorter texts that are faster to tokenize.\r\n\r\nMoreover, the way `num_proc` works is by slicing the dataset and passing each slice to a process to run the `map()` function. So at the very end of `map()`, some processes might have finished transforming their slice of data while others are still running, causing the throughput to become lower.",
"I did see some comments about how num_proc=None could help and outputting numpy arrays can also help in the docs, but this seems quite odd now dropping down to 1it/s\r\n\r\n```bash\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46048888/46390354 [12:33:30<4:20:32, 21.84 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46049888/46390354 [12:36:11<8:37:59, 10.95 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46050888/46390354 [12:46:35<24:56:56, 3.78 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46051888/46390354 [12:56:43<35:08:10, 2.68 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46052888/46390354 [13:06:58<42:05:41, 2.23 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46053888/46390354 [13:16:01<44:40:18, 2.09 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46054888/46390354 [13:25:11<46:35:28, 2.00 examples/s]\r\nRunning tokenizer on dataset (num_proc=48): 99%|█████████▉| 46055888/46390354 [13:34:23<47:55:34, 1.94 examples/s]\r\n```\r\n\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6734/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6734 | https://github.com/huggingface/datasets/issues/6734 | false |
2,186,811,724 | https://api.github.com/repos/huggingface/datasets/issues/6733/labels{/name} | ### Describe the bug
I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error:
```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None```
The dataset I'm using is "lmsys/chatbot_arena_conversations". The folder structure is
- README.md
- data
- train-00000-of-00001-cced8514c7ed782a.parquet
### Steps to reproduce the bug
1. Download dataset using HuggingFace CLI: ```huggingface-cli download lmsys/chatbot_arena_conversations --local-dir ./lmsys/chatbot_arena_conversations```
2. In Python
```
from datasets import load_dataset
load_dataset("lmsys/chatbot_arena_conversations")
```
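(Editor's illustrative workaround, not part of the original report: because the download already contains a Parquet file, `load_dataset` can usually be pointed at it directly; the path below simply mirrors the folder structure listed above.)
```python
from datasets import load_dataset

dataset = load_dataset(
    "parquet",
    data_files="./lmsys/chatbot_arena_conversations/data/train-00000-of-00001-cced8514c7ed782a.parquet",
)
```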
### Expected behavior
Should return a Dataset Dict in the form of
```
DatasetDict({
train: Dataset({
features: [...],
num_rows: 33,000
})
})
```
### Environment info
Python 3.11.5
Datasets 2.18.0
Transformers 4.38.2
Pytorch 2.2.0
Pyarrow 15.0.1
Rocky Linux release 8.9 (Green Obsidian)
| 2024-03-15T18:09:02Z | 6,733 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-14T16:41:27Z | https://api.github.com/repos/huggingface/datasets/issues/6733/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6733/timeline | EmptyDatasetError when loading dataset downloaded with HuggingFace cli | https://api.github.com/repos/huggingface/datasets/issues/6733/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/77196999?v=4",
"events_url": "https://api.github.com/users/StwayneXG/events{/privacy}",
"followers_url": "https://api.github.com/users/StwayneXG/followers",
"following_url": "https://api.github.com/users/StwayneXG/following{/other_user}",
"gists_url": "https://api.github.com/users/StwayneXG/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StwayneXG",
"id": 77196999,
"login": "StwayneXG",
"node_id": "MDQ6VXNlcjc3MTk2OTk5",
"organizations_url": "https://api.github.com/users/StwayneXG/orgs",
"received_events_url": "https://api.github.com/users/StwayneXG/received_events",
"repos_url": "https://api.github.com/users/StwayneXG/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StwayneXG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StwayneXG/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StwayneXG"
} | [] | null | null | NONE | null | null | I_kwDODunzps6CWBlM | [
"Hi! `datasets` is not compatible with `huggingface_hub`'s cache structure, hence the error.\r\n\r\nYou can track https://github.com/huggingface/datasets/issues/5080 to get notified when this is implemented."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6733/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6733 | https://github.com/huggingface/datasets/issues/6733 | false |
2,182,844,673 | https://api.github.com/repos/huggingface/datasets/issues/6731/labels{/name} | ### Describe the bug
### My Code
```
from datasets import load_dataset
res=[]
for i in [0,1]:
di=load_dataset(
"json",
data_files='path_to.json',
split='train',
streaming=True,
).map(lambda x: {"source": i})
res.append(di)
for e in res[0]:
print(e)
```
### Unexpected Behavior
Data in `res[0]` has `source=1`. However, the expected value is 0.
### FYI
I further switched `streaming` to `False`, and the output value is as expected (0). So there may be a bug in setting `streaming=True` in a for loop.
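(Editor's illustrative note, not part of the original report: besides the `fn_kwargs` approach mentioned in the issue comments below, the usual Python idiom is to freeze the loop variable as a default argument, so each lazily executed `map` sees its own value:)
```python
di = load_dataset(
    "json",
    data_files="path_to.json",
    split="train",
    streaming=True,
).map(lambda x, source=i: {"source": source})  # binds the current value of i at definition time
```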
### Environment
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
### Steps to reproduce the bug
1. Create a Json file with any content.
2. Run the provided code.
3. Switch `streaming` to `False` and run again to see the expected behavior.
### Expected behavior
The expected behavior is that the data are mapped with their corresponding value in the for loop.
### Environment info
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
Ubuntu 20.04 | 2024-03-14T15:27:02Z | 6,731 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-12T23:26:43Z | https://api.github.com/repos/huggingface/datasets/issues/6731/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6731/timeline | Unexpected behavior when using load_dataset with streaming=True in a for loop | https://api.github.com/repos/huggingface/datasets/issues/6731/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"events_url": "https://api.github.com/users/uApiv/events{/privacy}",
"followers_url": "https://api.github.com/users/uApiv/followers",
"following_url": "https://api.github.com/users/uApiv/following{/other_user}",
"gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uApiv",
"id": 42908296,
"login": "uApiv",
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"organizations_url": "https://api.github.com/users/uApiv/orgs",
"received_events_url": "https://api.github.com/users/uApiv/received_events",
"repos_url": "https://api.github.com/users/uApiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uApiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uApiv"
} | [] | null | null | NONE | null | null | I_kwDODunzps6CG5EB | [
"This is normal behavior in python when using `lambda`: the `i` defined in your `lambda` refers to the global variable `i` in your loop, and `i` equals to `1` when you run your `for e in res[0]` line.\r\n\r\nYou should pass `fn_kwargs` that will be passed to your `lambda` instead of using the global variable:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nres=[]\r\nfor i in [0,1]:\r\n di = load_dataset(\r\n \"json\", \r\n data_files='path_to.json', \r\n split='train',\r\n streaming=True, \r\n ).map(lambda x, source: {\"source\": source}, fn_kwargs={\"source\": i})\r\n\r\n res.append(di)\r\n\r\nfor e in res[0]:\r\n print(e)\r\n```\r\n\r\nThis doesn't happen in non-streaming since in that case `map` is executed while the variable `i` has the right value. In streaming mode, `map` is executed on-the-fly when you iterate on the dataset."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6731/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6731 | https://github.com/huggingface/datasets/issues/6731 | false |
2,181,881,499 | https://api.github.com/repos/huggingface/datasets/issues/6730/labels{/name} | The Pandas packaged builder is undocumented and relies on `pickle` to read the data, making it **unsafe**. Moreover, I haven't seen a single instance of this builder being used (not even using the GH/Hub search), so we should deprecate it. | 2024-03-12T17:42:33Z | 6,730 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-12T15:12:13Z | https://api.github.com/repos/huggingface/datasets/issues/6730/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6730/timeline | Deprecate Pandas builder | https://api.github.com/repos/huggingface/datasets/issues/6730/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-12T17:36:24Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6730.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6730",
"merged_at": "2024-03-12T17:36:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6730.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6730"
} | PR_kwDODunzps5pZDsB | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6730). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005301 / 0.011353 (-0.006052) | 0.003701 / 0.011008 (-0.007307) | 0.065830 / 0.038508 (0.027322) | 0.029791 / 0.023109 (0.006682) | 0.251676 / 0.275898 (-0.024222) | 0.283824 / 0.323480 (-0.039655) | 0.003083 / 0.007986 (-0.004903) | 0.004144 / 0.004328 (-0.000185) | 0.053670 / 0.004250 (0.049419) | 0.042020 / 0.037052 (0.004968) | 0.266389 / 0.258489 (0.007899) | 0.296740 / 0.293841 (0.002900) | 0.028320 / 0.128546 (-0.100226) | 0.010604 / 0.075646 (-0.065042) | 0.219881 / 0.419271 (-0.199390) | 0.036216 / 0.043533 (-0.007317) | 0.255718 / 0.255139 (0.000579) | 0.275808 / 0.283200 (-0.007392) | 0.018407 / 0.141683 (-0.123276) | 1.140007 / 1.452155 (-0.312148) | 1.174005 / 1.492716 (-0.318711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091230 / 0.018006 (0.073224) | 0.300704 / 0.000490 (0.300215) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018950 / 0.037411 (-0.018461) | 0.062177 / 0.014526 (0.047651) | 0.073968 / 0.176557 (-0.102589) | 0.122161 / 0.737135 (-0.614974) | 0.075001 / 0.296338 (-0.221338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285675 / 0.215209 (0.070466) | 2.794176 / 2.077655 (0.716522) | 1.478666 / 1.504120 (-0.025454) | 1.361843 / 1.541195 (-0.179351) | 1.383847 / 
1.468490 (-0.084643) | 0.568610 / 4.584777 (-4.016167) | 2.402351 / 3.745712 (-1.343361) | 2.860772 / 5.269862 (-2.409089) | 1.768588 / 4.565676 (-2.797089) | 0.063257 / 0.424275 (-0.361018) | 0.004998 / 0.007607 (-0.002609) | 0.340897 / 0.226044 (0.114853) | 3.340238 / 2.268929 (1.071310) | 1.836434 / 55.444624 (-53.608190) | 1.556844 / 6.876477 (-5.319633) | 1.610685 / 2.142072 (-0.531388) | 0.644941 / 4.805227 (-4.160286) | 0.117593 / 6.500664 (-6.383072) | 0.042803 / 0.075469 (-0.032666) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979181 / 1.841788 (-0.862607) | 11.901365 / 8.074308 (3.827057) | 9.587943 / 10.191392 (-0.603449) | 0.139648 / 0.680424 (-0.540776) | 0.013904 / 0.534201 (-0.520297) | 0.291249 / 0.579283 (-0.288034) | 0.260737 / 0.434364 (-0.173627) | 0.326000 / 0.540337 (-0.214338) | 0.433459 / 1.386936 (-0.953477) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005503 / 0.011353 (-0.005850) | 0.003738 / 0.011008 (-0.007270) | 0.049137 / 0.038508 (0.010629) | 0.031484 / 0.023109 (0.008374) | 0.265783 / 0.275898 (-0.010115) | 0.295125 / 0.323480 (-0.028354) | 0.004074 / 0.007986 (-0.003911) | 0.002707 / 0.004328 (-0.001622) | 0.048340 / 0.004250 (0.044089) | 0.045453 / 0.037052 (0.008401) | 0.276500 / 0.258489 (0.018011) | 0.312002 / 0.293841 (0.018162) | 0.029139 / 0.128546 (-0.099408) | 0.010445 / 0.075646 (-0.065201) | 0.057486 / 0.419271 (-0.361785) | 0.052386 / 0.043533 (0.008853) | 0.267099 / 0.255139 (0.011960) | 0.283193 / 0.283200 (-0.000007) | 0.018368 / 0.141683 (-0.123315) | 1.136207 / 1.452155 (-0.315948) | 1.178418 / 1.492716 (-0.314298) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089270 / 0.018006 (0.071264) | 0.301087 / 0.000490 (0.300598) | 0.000208 / 0.000200 (0.000008) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021991 / 0.037411 (-0.015421) | 0.075357 / 0.014526 (0.060831) | 0.087781 / 0.176557 (-0.088775) | 0.126923 / 0.737135 (-0.610212) | 0.088491 / 0.296338 (-0.207847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293653 / 0.215209 (0.078444) | 2.872156 / 2.077655 (0.794501) | 1.559229 / 1.504120 (0.055109) | 1.441201 / 1.541195 (-0.099993) | 1.472642 / 1.468490 (0.004152) | 0.588463 / 4.584777 (-3.996314) | 2.447685 / 3.745712 (-1.298028) | 2.755752 / 5.269862 (-2.514110) | 1.796591 / 4.565676 (-2.769086) | 0.068024 / 0.424275 (-0.356252) | 0.005148 / 0.007607 (-0.002459) | 0.343572 / 0.226044 (0.117528) | 3.347856 / 2.268929 (1.078927) | 1.945977 / 55.444624 (-53.498647) | 1.648953 / 6.876477 (-5.227524) | 1.804468 / 2.142072 (-0.337604) | 0.651034 / 4.805227 (-4.154193) | 0.118130 / 6.500664 (-6.382534) | 0.041019 / 0.075469 (-0.034450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020461 / 1.841788 (-0.821327) | 12.514237 / 8.074308 (4.439929) | 10.696276 / 10.191392 (0.504884) | 0.154549 / 0.680424 (-0.525874) | 0.015964 / 0.534201 (-0.518237) | 0.290392 / 0.579283 (-0.288891) | 0.276074 / 0.434364 (-0.158290) | 0.326253 / 0.540337 (-0.214085) | 0.440383 / 1.386936 (-0.946553) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#29ffc270da34de70cf8e28b2ebeadba1c06d8730 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6730/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6730 | https://github.com/huggingface/datasets/pull/6730 | true |
2,180,237,159 | https://api.github.com/repos/huggingface/datasets/issues/6729/labels{/name} | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response
get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files
split_modules = {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp>
split: infer_module_for_data_files_list(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list
return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives
for f in xglob(extracted, recursive=True, download_config=download_config)[
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob
fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem
return cls(**storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(
File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents
endrec = _EndRecData(fp)
File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData
return _EndRecData64(fpin, -sizeEndCentDir, endrec)
File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64
raise BadZipFile("zipfiles that span multiple disks are not supported")
zipfile.BadZipFile: zipfiles that span multiple disks are not supported
```
The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:
<img width="629" alt="Screenshot 2024-03-11 at 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
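For context, Python's `zipfile` module rejects archives that are split across multiple "disks" (e.g. created with `zip -s`), which is exactly what the traceback above shows. A minimal sketch (assuming local copies of the parts) to confirm that a file triggers that specific error:

```python
import zipfile

def is_multi_disk_zip(path: str) -> bool:
    """Return True if `path` belongs to a zip archive split across multiple disks."""
    try:
        with zipfile.ZipFile(path) as zf:
            zf.infolist()
        return False
    except zipfile.BadZipFile as err:
        return "span multiple disks" in str(err)
```

A possible workaround on the dataset side would be to merge the parts into a single archive (e.g. `zip -s 0 split.zip --out single.zip`) before uploading.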
| 2024-03-11T21:07:46Z | 6,729 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | 2024-03-11T21:07:41Z | https://api.github.com/repos/huggingface/datasets/issues/6729/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6729/timeline | Support zipfiles that span multiple disks? | https://api.github.com/repos/huggingface/datasets/issues/6729/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps6B88dn | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6729/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6729 | https://github.com/huggingface/datasets/issues/6729 | false |
2,178,607,012 | https://api.github.com/repos/huggingface/datasets/issues/6728/labels{/name} | ### Describe the bug
This bug is triggered under the following conditions:
- The dataset repo id has no organization name (e.g. `bookcorpus`, `gsm8k`, `wikipedia`), i.e. it is not of the form `A/B`.
- `HF_ENDPOINT` is set and its hostname does not match `(hub-ci.)?huggingface.co`.
- `datasets>2.15.0` or `huggingface-hub>0.19.4` is installed; for example, the latest versions `datasets==2.18.0` and `huggingface-hub==0.21.4`.
### Steps to reproduce the bug
The issue can be reproduced with the following code:
1. Install specific versions of `datasets` and `huggingface_hub`:
```bash
pip install datasets==2.18.0
pip install huggingface_hub==0.21.4
```
2. Execute the following Python code:
```Python
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
bookcorpus = load_dataset('bookcorpus', split='train')
```
console output:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1830, in dataset_module_factory
with fs.open(f"datasets/{path}/{filename}", "r", encoding="utf-8") as f:
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1295, in open
self.open(
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1307, in open
f = self._open(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 228, in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 615, in __init__
self.resolved_path = fs.resolve_path(path, revision=revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 180, in resolve_path
repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 117, in _repo_and_revision_exist
self._api.repo_info(repo_id, revision=revision, repo_type=repo_type)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2413, in repo_info
return method(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2286, in dataset_info
hf_raise_for_status(r)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 362, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 401 Client Error: Unauthorized for url: https://hf-mirror.com/api/datasets/bookcorpus/bookcorpus.py (Request ID: Root=1-65ee8659-5ab10eec5960c63e71f2bb58;b00bdbea-fd6e-4a74-8fe0-bc4682ae090e)
```
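For what it's worth, the failure can be narrowed down outside of `datasets` with a small probe against the exact URL from the traceback (a sketch; interpreting the status code is an assumption based on the trace above):

```python
import os
import requests

endpoint = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
# Same URL as in the traceback: for datasets without an organization name, the
# loader ends up asking the hub about a repo id that is really "<dataset>/<script>.py".
url = f"{endpoint}/api/datasets/bookcorpus/bookcorpus.py"
print(requests.get(url).status_code)
```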
### Expected behavior
The dataset was downloaded correctly without any errors.
### Environment info
datasets==2.18.0
huggingface-hub==0.21.4 | 2024-03-15T14:52:07Z | 6,728 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-11T09:06:38Z | https://api.github.com/repos/huggingface/datasets/issues/6728/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6728/timeline | Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT` | https://api.github.com/repos/huggingface/datasets/issues/6728/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10057041?v=4",
"events_url": "https://api.github.com/users/padeoe/events{/privacy}",
"followers_url": "https://api.github.com/users/padeoe/followers",
"following_url": "https://api.github.com/users/padeoe/following{/other_user}",
"gists_url": "https://api.github.com/users/padeoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padeoe",
"id": 10057041,
"login": "padeoe",
"node_id": "MDQ6VXNlcjEwMDU3MDQx",
"organizations_url": "https://api.github.com/users/padeoe/orgs",
"received_events_url": "https://api.github.com/users/padeoe/received_events",
"repos_url": "https://api.github.com/users/padeoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padeoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padeoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padeoe"
} | [] | null | completed | NONE | 2024-03-15T14:52:07Z | null | I_kwDODunzps6B2uek | [
"Through debugging, I found a potential solution is to modify the code in the error handling module of `huggingface_hub`: https://github.com/huggingface/huggingface_hub/commit/56d6c798c44e83d2a3167e74c022737d8fcbe822 ",
"@Wauplin ",
"Thanks for investigating and reporting the bug @padeoe! I've opened a PR in `huggingface_hub` with your suggested fix! :) https://github.com/huggingface/huggingface_hub/pull/2119"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6728/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6728 | https://github.com/huggingface/datasets/issues/6728 | false |
2,177,826,110 | https://api.github.com/repos/huggingface/datasets/issues/6727/labels{/name} | Hello,
When working with bio-data, each feature often has metadata associated with it (e.g. species, lineage, SNP position, etc.). To store this, I like to use the feature classes with an added `metadata` attribute. However, when saving or loading with custom features, you get an error because those classes don't exist in the global namespace of `datasets.features.features`. Take, for example:
```python
from dataclasses import dataclass, field
from datasets import Dataset
from datasets.features.features import Value, Features
@dataclass
class FeatureA(Value):
    # default_factory so each instance gets its own dict
    metadata: dict = field(default_factory=dict)
    _type: str = field(default="FeatureA", init=False, repr=False)

@dataclass
class FeatureB(Value):
    metadata: dict = field(default_factory=dict)
    _type: str = field(default="FeatureB", init=False, repr=False)

test_data = {
    "a": [1, 2, 3],
    "b": [4, 5, 6],
}

test_data = Dataset.from_dict(
    test_data,
    features=Features({
        "a": FeatureA("int32", metadata={"species": "lactobacillus acetotolerans"}),
        "b": FeatureB("int32", metadata={"species": "lactobacillus iners"}),
    })
)

# returns an error since FeatureA and FeatureB are not in the global namespace
test_data.save_to_disk('./test_data')
```
```
Saving the dataset (0/1 shards):   0%|          | 0/3 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[2], line 28
     19 test_data = Dataset.from_dict(
     20     test_data,
     21     features=Features({
   (...)
     24     })
     25 )
     27 # returns an error since FeatureA and FeatureB are not in the global namespace
---> 28 test_data.save_to_disk('./test_data')
...
File ~\Documents\datasets\src\datasets\features\features.py:1361, in generate_from_dict(obj)
   1359     return {key: generate_from_dict(value) for key, value in obj.items()}
   1360 obj = dict(obj)
-> 1361 class_type = globals()[obj.pop("_type")]
   1363 if class_type == Sequence:
   1364     return Sequence(feature=generate_from_dict(obj["feature"]), length=obj.get("length", -1))

KeyError: 'FeatureA'
```
We can avoid this by having a registry (like formatters) and doing
```python
from datasets.features.features import register_feature
register_feature(FeatureA, "FeatureA")
register_feature(FeatureB, "FeatureB")
test_data.save_to_disk('./test_data')
```
Saving the dataset (1/1 shards): 100%|------| 3/3 [00:00<00:00, 211.13 examples/s]
and loading from disk returns with all metadata information
```python
from datasets import load_from_disk
test_data = load_from_disk('./test_data')
test_data.features
```
{'a': FeatureA(dtype='int32', id=None, metadata={'species': 'lactobacillus acetotolerans'}),
'b': FeatureB(dtype='int32', id=None, metadata={'species': 'lactobacillus iners'})}
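For reference, a minimal sketch of what such a registry could look like (the helper names here are illustrative; the actual implementation in `datasets` may differ):

```python
_FEATURE_TYPES = {}

def register_feature(feature_cls, feature_type: str) -> None:
    """Associate a `_type` string with a feature class so it can be found at load time."""
    _FEATURE_TYPES[feature_type] = feature_cls

def _lookup_feature(feature_type: str):
    # Custom features come from the registry; built-in ones fall back to module globals.
    if feature_type in _FEATURE_TYPES:
        return _FEATURE_TYPES[feature_type]
    return globals()[feature_type]
```

`generate_from_dict` would then call `_lookup_feature(obj.pop("_type"))` instead of indexing `globals()` directly.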
| 2024-03-13T12:08:49Z | 6,727 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-10T17:47:51Z | https://api.github.com/repos/huggingface/datasets/issues/6727/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6727/timeline | Using a registry instead of calling globals for fetching feature types | https://api.github.com/repos/huggingface/datasets/issues/6727/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94"
} | [] | null | null | CONTRIBUTOR | 2024-03-13T10:46:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6727.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6727",
"merged_at": "2024-03-13T10:46:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6727.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6727"
} | PR_kwDODunzps5pLJyE | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6727). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"looks like some files are missing in your google storage",
"cc @mariosasko is it related to https://github.com/huggingface/datasets/pull/6474 ? The files should ideally not move for backward compatibility anyway",
"@lhoestq All the files are still there.\r\n\r\nThe problem is that the `natural_questions` is now a no-code dataset, so the test's paths are no longer correct (unless the revision is pinned to the previous version). \r\n\r\n@psmyth94 This has been fixed on `main`, so you can make the CI tests green with the following:\r\n```python\r\ngit remote add upstream https://github.com/huggingface/datasets.git\r\ngit pull upstream main\r\ngit push\r\n```",
"Thank you @mariosasko ! I'm updating this branch if you don't mind @psmyth94 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004903 / 0.011353 (-0.006450) | 0.003105 / 0.011008 (-0.007903) | 0.061980 / 0.038508 (0.023471) | 0.029726 / 0.023109 (0.006617) | 0.243406 / 0.275898 (-0.032492) | 0.262530 / 0.323480 (-0.060950) | 0.003905 / 0.007986 (-0.004081) | 0.002617 / 0.004328 (-0.001712) | 0.047851 / 0.004250 (0.043601) | 0.040397 / 0.037052 (0.003345) | 0.259461 / 0.258489 (0.000972) | 0.285059 / 0.293841 (-0.008782) | 0.027321 / 0.128546 (-0.101225) | 0.009876 / 0.075646 (-0.065770) | 0.206999 / 0.419271 (-0.212273) | 0.034906 / 0.043533 (-0.008626) | 0.245120 / 0.255139 (-0.010019) | 0.270490 / 0.283200 (-0.012710) | 0.017341 / 0.141683 (-0.124342) | 1.128182 / 1.452155 (-0.323973) | 1.173024 / 1.492716 (-0.319693) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089337 / 0.018006 (0.071331) | 0.298256 / 0.000490 (0.297767) | 0.000216 / 0.000200 (0.000016) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018179 / 0.037411 (-0.019233) | 0.061275 / 0.014526 (0.046749) | 0.073137 / 0.176557 (-0.103419) | 0.119603 / 0.737135 (-0.617532) | 0.073969 / 0.296338 (-0.222370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283109 / 0.215209 (0.067900) | 2.765441 / 2.077655 (0.687787) | 1.471276 / 1.504120 (-0.032844) | 1.346365 / 1.541195 (-0.194830) | 1.360668 / 
1.468490 (-0.107822) | 0.549947 / 4.584777 (-4.034830) | 2.344213 / 3.745712 (-1.401499) | 2.700905 / 5.269862 (-2.568956) | 1.689936 / 4.565676 (-2.875741) | 0.061985 / 0.424275 (-0.362290) | 0.004923 / 0.007607 (-0.002684) | 0.329833 / 0.226044 (0.103788) | 3.277580 / 2.268929 (1.008652) | 1.833987 / 55.444624 (-53.610638) | 1.571023 / 6.876477 (-5.305454) | 1.573259 / 2.142072 (-0.568813) | 0.627504 / 4.805227 (-4.177723) | 0.114106 / 6.500664 (-6.386558) | 0.041197 / 0.075469 (-0.034272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967400 / 1.841788 (-0.874388) | 11.046527 / 8.074308 (2.972219) | 9.542214 / 10.191392 (-0.649178) | 0.140745 / 0.680424 (-0.539679) | 0.013627 / 0.534201 (-0.520574) | 0.288429 / 0.579283 (-0.290855) | 0.260509 / 0.434364 (-0.173855) | 0.324704 / 0.540337 (-0.215633) | 0.419366 / 1.386936 (-0.967570) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005123 / 0.011353 (-0.006230) | 0.003119 / 0.011008 (-0.007890) | 0.048931 / 0.038508 (0.010423) | 0.032067 / 0.023109 (0.008958) | 0.276825 / 0.275898 (0.000927) | 0.297589 / 0.323480 (-0.025890) | 0.004075 / 0.007986 (-0.003911) | 0.002579 / 0.004328 (-0.001750) | 0.047862 / 0.004250 (0.043612) | 0.044032 / 0.037052 (0.006980) | 0.289469 / 0.258489 (0.030980) | 0.327269 / 0.293841 (0.033428) | 0.029369 / 0.128546 (-0.099177) | 0.010180 / 0.075646 (-0.065466) | 0.057111 / 0.419271 (-0.362161) | 0.051046 / 0.043533 (0.007513) | 0.276758 / 0.255139 (0.021619) | 0.296084 / 0.283200 (0.012884) | 0.017376 / 0.141683 (-0.124306) | 1.154486 / 1.452155 (-0.297669) | 1.192699 / 1.492716 (-0.300018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.085981 / 0.018006 (0.067974) | 0.296956 / 0.000490 (0.296466) | 0.000211 / 0.000200 (0.000011) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021239 / 0.037411 (-0.016172) | 0.074851 / 0.014526 (0.060326) | 0.085676 / 0.176557 (-0.090881) | 0.125876 / 0.737135 (-0.611259) | 0.087573 / 0.296338 (-0.208765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289220 / 0.215209 (0.074011) | 2.812342 / 2.077655 (0.734688) | 1.572886 / 1.504120 (0.068766) | 1.446442 / 1.541195 (-0.094752) | 1.458737 / 1.468490 (-0.009753) | 0.562010 / 4.584777 (-4.022767) | 2.422896 / 3.745712 (-1.322816) | 2.578408 / 5.269862 (-2.691454) | 1.689998 / 4.565676 (-2.875678) | 0.064782 / 0.424275 (-0.359493) | 0.005051 / 0.007607 (-0.002556) | 0.339982 / 0.226044 (0.113938) | 3.309882 / 2.268929 (1.040953) | 1.910273 / 55.444624 (-53.534351) | 1.649723 / 6.876477 (-5.226753) | 1.744073 / 2.142072 (-0.397999) | 0.651905 / 4.805227 (-4.153323) | 0.114606 / 6.500664 (-6.386058) | 0.040030 / 0.075469 (-0.035439) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008374 / 1.841788 (-0.833414) | 11.547300 / 8.074308 (3.472992) | 9.966061 / 10.191392 (-0.225331) | 0.144874 / 0.680424 (-0.535550) | 0.014400 / 0.534201 (-0.519801) | 0.285435 / 0.579283 (-0.293848) | 0.274755 / 0.434364 (-0.159609) | 0.323105 / 0.540337 (-0.217232) | 0.439172 / 1.386936 (-0.947764) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4591ac120e9d6c082b2479d2005c04b9c36f539c \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6727/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6727 | https://github.com/huggingface/datasets/pull/6727 | true |
2,177,097,232 | https://api.github.com/repos/huggingface/datasets/issues/6726/labels{/name} | ### Describe the bug
# Let's make it faster
First, some evidence...
![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965)
Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long.
See? It's pretty slow.
What is resolve pattern doing?
```
resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543
resolve_pattern took 20.815081119537354 seconds
```
Makes sense. How to improve it?
## Bigger project, biggest payoff
Databricks (and consequently, Spark) stores a compressed manifest file of the files contained in the remote filesystem.
Then you download one tiny file, decompress it, and all the operations are local instead of these shenanigans.
It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside the data.
This would make resolution time so fast that nobody would ever think about it again.
It also means you either need to have the uploader compute it _every time_, or have a hook that computes it.
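As a rough illustration of the idea (the layout below is an assumption, not an existing `datasets` feature), the manifest could be as simple as a compressed JSON listing of paths and sizes that pattern resolution globs over locally instead of walking the remote tree:

```python
import gzip
import json

def write_manifest(fs, repo_root: str, manifest_path: str) -> None:
    # Walk the remote filesystem once at upload time and persist the listing.
    entries = [
        {"name": info["name"], "size": info["size"]}
        for info in fs.find(repo_root, detail=True).values()
    ]
    with gzip.open(manifest_path, "wt") as f:
        json.dump(entries, f)

def read_manifest(manifest_path: str) -> list:
    # Pattern resolution can then match globs against this list without network calls.
    with gzip.open(manifest_path, "rt") as f:
        return json.load(f)
```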
## Smaller project, immediate payoff: Be diligent in avoiding deepcopy
Revise the _ls_tree method to avoid deepcopy:
```python
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    for path_info in tree:
        if isinstance(path_info, RepoFile):
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(cache_path_info)
    return copy.deepcopy(out)  # copy to not let users modify the dircache
```
Observe this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster.
```python
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    def make_cache_path_info(path_info):
        if isinstance(path_info, RepoFile):
            return {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            return {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }

    for path_info in tree:
        cache_path_info = make_cache_path_info(path_info)
        out_cache_path_info = make_cache_path_info(path_info)  # copy to not let users modify the dircache
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(out_cache_path_info)
    return out
```
Note there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s.
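A quick way to sanity-check that claim in isolation (a standalone micro-benchmark sketch, not code from `huggingface_hub`):

```python
import copy
import timeit

entry = {"name": "datasets/x/train/shard-00000.parquet", "size": 1, "type": "file"}
out = [dict(entry) for _ in range(100_000)]

print("deepcopy:", timeit.timeit(lambda: copy.deepcopy(out), number=1))
print("rebuild: ", timeit.timeit(lambda: [dict(e) for e in out], number=1))
```

Rebuilding flat dicts like these is typically far cheaper than `copy.deepcopy`, which has to recurse into and memoize every object it touches.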
## Medium project, medium payoff
After the above change, we have this profile:
![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c)
Figure 2: x-axis is 355 seconds. Note that the globbing and the _ls_tree deepcopy are gone. No surprise there. It's much faster now, but we still spend ~187 seconds in get_fs_token_paths.
Well, get_fs_token_paths is part of fsspec. We don't need to fix that because we can trust its developers to write high-performance code; probably the caller has misconfigured something. Let's take a look at the storage_options being provided to the filesystem that is constructed during this call.
Ah yes, streaming_download_manager::_prepare_single_hop_path_and_storage_options. We know streaming download manager is not compatible with async right now, but we really need this specific part of the code to be async. We're spending so much time checking isDir on the remote filesystem, it's a huge waste.
We can easily make the call 20-30x faster by using async, removing this performance bottleneck almost entirely (and reducing the total time of this part of the code to <30s). There is no reason to block on isDir calls when streaming.
I'm not going to mess w/ this one myself; I didn't write the streaming impl, and I don't know how it works, but I know the isDir check can be async.
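For illustration only (a sketch of the general idea, not a patch against the streaming code), the blocking isDir checks could be overlapped with something like:

```python
import asyncio
from functools import partial

async def isdir_many(fs, paths):
    # Run the blocking fs.isdir() calls concurrently in a thread pool instead of
    # one after another; `fs` is any fsspec filesystem instance.
    loop = asyncio.get_running_loop()
    checks = [loop.run_in_executor(None, partial(fs.isdir, p)) for p in paths]
    results = await asyncio.gather(*checks)
    return dict(zip(paths, results))
```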
### Steps to reproduce the bug
```python
with cProfile.Profile() as pr:
    pr.enable()
    # Begin Data
    if not os.path.exists(data_cache_dir):
        os.makedirs(data_cache_dir, exist_ok=True)
    training_dataset = load_dataset(training_dataset_name, split=training_split, cache_dir=data_cache_dir, streaming=True).take(training_slice)
    eval_dataset = load_dataset(eval_dataset_name, split=eval_split, cache_dir=data_cache_dir, streaming=True).take(eval_slice)
    # End Data
    pr.disable()
    pr.create_stats()
    if not os.path.exists(profiling_path):
        os.makedirs(profiling_path, exist_ok=True)
    pr.dump_stats(os.path.join(profiling_path, "cprofile.prof"))
```
run this code for "cerebras/SlimPajama-627B" and whatever other params
### Expected behavior
Something better.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 2024-03-09T07:11:08Z | 6,726 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-09T07:08:45Z | https://api.github.com/repos/huggingface/datasets/issues/6726/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6726/timeline | Profiling for HF Filesystem shows there are easy performance gains to be made | https://api.github.com/repos/huggingface/datasets/issues/6726/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/159512661?v=4",
"events_url": "https://api.github.com/users/awgr/events{/privacy}",
"followers_url": "https://api.github.com/users/awgr/followers",
"following_url": "https://api.github.com/users/awgr/following{/other_user}",
"gists_url": "https://api.github.com/users/awgr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awgr",
"id": 159512661,
"login": "awgr",
"node_id": "U_kgDOCYH4VQ",
"organizations_url": "https://api.github.com/users/awgr/orgs",
"received_events_url": "https://api.github.com/users/awgr/received_events",
"repos_url": "https://api.github.com/users/awgr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awgr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awgr"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Bw94Q | [
"FWIW I debugged this while waiting for it to go",
"Oh I forgot to mention you can also cache resolve_pattern, and that seemed to also substantially improves things, if you want to load a dataset twice for whatever reason."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6726/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6726 | https://github.com/huggingface/datasets/issues/6726 | false |
2,175,527,530 | https://api.github.com/repos/huggingface/datasets/issues/6725/labels{/name} | ### Feature request
Request for a comparison of Hugging Face Datasets with other data formats, especially WebDataset.
### Motivation
I see that Hugging Face Datasets uses Apache Arrow as its backend, which seems great, but I'm curious how it compares with other dataset formats, like WebDataset: what are the pros and cons of each?
### Your contribution
More information | 2024-03-08T08:23:01Z | 6,725 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-08T08:23:01Z | https://api.github.com/repos/huggingface/datasets/issues/6725/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6725/timeline | Request for a comparison of huggingface datasets compared with other data format especially webdataset | https://api.github.com/repos/huggingface/datasets/issues/6725/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Bq-pq | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6725/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6725 | https://github.com/huggingface/datasets/issues/6725 | false |
2,174,398,227 | https://api.github.com/repos/huggingface/datasets/issues/6724/labels{/name} | ### Describe the bug
My data repository was first called `BramVanroy/hplt-mono-v1-2`, but I then renamed it to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts, in this line:
https://github.com/huggingface/datasets/blob/6fb6c834f008996c994b0a86c3808d0a33d44525/src/datasets/load.py#L1845
When I print `filename`, it returns `hplt-mono-v1-2.py`, but the files in the repo are of course `['.gitattributes', 'README.md', 'hplt_mono_v1_2.py']`. So `filename` is derived from the original repo name instead of the renamed one.
I am not sure whether this is a caching issue, or how I can resolve it.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt-mono-v1-2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
That the most recent repo name is used when `filename` is generated.
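In the meantime, a possible workaround (a sketch, untested against this exact repo) is to download the script and point `load_dataset` at the local file, which sidesteps the filename derived from the repo name:

```python
from huggingface_hub import hf_hub_download
from datasets import load_dataset

script = hf_hub_download(
    "BramVanroy/hplt_mono_v1_2", "hplt_mono_v1_2.py", repo_type="dataset"
)
ds = load_dataset(script, "ky", trust_remote_code=True)
```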
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
| 2024-03-07T20:06:25Z | 6,724 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-07T17:38:38Z | https://api.github.com/repos/huggingface/datasets/issues/6724/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6724/timeline | Dataset with loading script does not work in renamed repos | https://api.github.com/repos/huggingface/datasets/issues/6724/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps6Bmq8T | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6724/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6724 | https://github.com/huggingface/datasets/issues/6724 | false |
2,174,344,456 | https://api.github.com/repos/huggingface/datasets/issues/6723/labels{/name} | fix https://github.com/huggingface/datasets/pull/6722 | 2024-03-07T17:27:29Z | 6,723 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-07T17:09:29Z | https://api.github.com/repos/huggingface/datasets/issues/6723/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6723/timeline | get_dataset_default_config_name docstring | https://api.github.com/repos/huggingface/datasets/issues/6723/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-07T17:21:20Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6723",
"merged_at": "2024-03-07T17:21:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6723"
} | PR_kwDODunzps5o_fPU | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6723). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005658 / 0.011353 (-0.005694) | 0.003883 / 0.011008 (-0.007125) | 0.064007 / 0.038508 (0.025499) | 0.030370 / 0.023109 (0.007261) | 0.246677 / 0.275898 (-0.029221) | 0.270846 / 0.323480 (-0.052634) | 0.003102 / 0.007986 (-0.004884) | 0.002931 / 0.004328 (-0.001397) | 0.049446 / 0.004250 (0.045196) | 0.043555 / 0.037052 (0.006503) | 0.261810 / 0.258489 (0.003321) | 0.289705 / 0.293841 (-0.004136) | 0.028676 / 0.128546 (-0.099870) | 0.010778 / 0.075646 (-0.064868) | 0.210604 / 0.419271 (-0.208667) | 0.035987 / 0.043533 (-0.007546) | 0.248034 / 0.255139 (-0.007105) | 0.265019 / 0.283200 (-0.018181) | 0.018522 / 0.141683 (-0.123161) | 1.096364 / 1.452155 (-0.355791) | 1.152750 / 1.492716 (-0.339966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093987 / 0.018006 (0.075981) | 0.306143 / 0.000490 (0.305653) | 0.000218 / 0.000200 (0.000018) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018727 / 0.037411 (-0.018685) | 0.061983 / 0.014526 (0.047457) | 0.074254 / 0.176557 (-0.102303) | 0.121256 / 0.737135 (-0.615880) | 0.076756 / 0.296338 (-0.219582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278824 / 0.215209 (0.063615) | 2.815960 / 2.077655 (0.738305) | 1.472946 / 1.504120 (-0.031174) | 1.349722 / 1.541195 (-0.191473) | 1.327844 / 
1.468490 (-0.140646) | 0.574964 / 4.584777 (-4.009813) | 2.403458 / 3.745712 (-1.342254) | 2.769293 / 5.269862 (-2.500569) | 1.736970 / 4.565676 (-2.828706) | 0.063144 / 0.424275 (-0.361131) | 0.004983 / 0.007607 (-0.002625) | 0.331212 / 0.226044 (0.105168) | 3.231496 / 2.268929 (0.962567) | 1.798487 / 55.444624 (-53.646138) | 1.523010 / 6.876477 (-5.353467) | 1.559973 / 2.142072 (-0.582099) | 0.657036 / 4.805227 (-4.148191) | 0.119084 / 6.500664 (-6.381580) | 0.042982 / 0.075469 (-0.032487) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976433 / 1.841788 (-0.865355) | 11.475946 / 8.074308 (3.401638) | 9.339369 / 10.191392 (-0.852023) | 0.141761 / 0.680424 (-0.538662) | 0.014506 / 0.534201 (-0.519695) | 0.289944 / 0.579283 (-0.289340) | 0.273667 / 0.434364 (-0.160697) | 0.326682 / 0.540337 (-0.213655) | 0.458946 / 1.386936 (-0.927990) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003713 / 0.011008 (-0.007295) | 0.049297 / 0.038508 (0.010789) | 0.029723 / 0.023109 (0.006614) | 0.278664 / 0.275898 (0.002766) | 0.296387 / 0.323480 (-0.027093) | 0.004215 / 0.007986 (-0.003771) | 0.002680 / 0.004328 (-0.001648) | 0.048276 / 0.004250 (0.044025) | 0.044454 / 0.037052 (0.007402) | 0.290510 / 0.258489 (0.032021) | 0.319028 / 0.293841 (0.025187) | 0.029177 / 0.128546 (-0.099369) | 0.010361 / 0.075646 (-0.065285) | 0.056993 / 0.419271 (-0.362279) | 0.050765 / 0.043533 (0.007232) | 0.278234 / 0.255139 (0.023095) | 0.295848 / 0.283200 (0.012649) | 0.018776 / 0.141683 (-0.122906) | 1.134866 / 1.452155 (-0.317288) | 1.204083 / 1.492716 (-0.288634) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094397 / 0.018006 (0.076391) | 0.304693 / 0.000490 (0.304203) | 0.000207 / 0.000200 (0.000007) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021322 / 0.037411 (-0.016090) | 0.075384 / 0.014526 (0.060859) | 0.086961 / 0.176557 (-0.089596) | 0.124424 / 0.737135 (-0.612711) | 0.087802 / 0.296338 (-0.208536) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305542 / 0.215209 (0.090333) | 2.980678 / 2.077655 (0.903023) | 1.632348 / 1.504120 (0.128228) | 1.501466 / 1.541195 (-0.039728) | 1.517681 / 1.468490 (0.049191) | 0.579318 / 4.584777 (-4.005459) | 2.460734 / 3.745712 (-1.284978) | 2.650164 / 5.269862 (-2.619697) | 1.752061 / 4.565676 (-2.813615) | 0.064561 / 0.424275 (-0.359714) | 0.005097 / 0.007607 (-0.002510) | 0.359613 / 0.226044 (0.133569) | 3.518549 / 2.268929 (1.249620) | 1.962575 / 55.444624 (-53.482050) | 1.686108 / 6.876477 (-5.190369) | 1.787873 / 2.142072 (-0.354199) | 0.653715 / 4.805227 (-4.151512) | 0.117617 / 6.500664 (-6.383048) | 0.040359 / 0.075469 (-0.035110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021533 / 1.841788 (-0.820255) | 11.974817 / 8.074308 (3.900509) | 10.073530 / 10.191392 (-0.117862) | 0.141477 / 0.680424 (-0.538947) | 0.015081 / 0.534201 (-0.519120) | 0.292622 / 0.579283 (-0.286661) | 0.291043 / 0.434364 (-0.143321) | 0.347822 / 0.540337 (-0.192516) | 0.443647 / 1.386936 (-0.943289) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6fb6c834f008996c994b0a86c3808d0a33d44525 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6723/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6723 | https://github.com/huggingface/datasets/pull/6723 | true |
2,174,332,127 | https://api.github.com/repos/huggingface/datasets/issues/6722/labels{/name} | see https://github.com/huggingface/datasets-server/pull/2554#discussion_r1516516867 | 2024-03-07T17:21:10Z | 6,722 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-07T17:02:07Z | https://api.github.com/repos/huggingface/datasets/issues/6722/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6722/timeline | Add details in docstring | https://api.github.com/repos/huggingface/datasets/issues/6722/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | null | CONTRIBUTOR | 2024-03-07T17:21:08Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6722",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6722"
} | PR_kwDODunzps5o_ch0 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6722). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6722/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6722 | https://github.com/huggingface/datasets/pull/6722 | true |
2,173,931,714 | https://api.github.com/repos/huggingface/datasets/issues/6721/labels{/name} | Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
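
A minimal sketch of how a configuration name can be passed when loading locally (the paths and config name below are placeholders, not taken from the original question):

```python
from datasets import load_dataset

# For a local loading script, the second argument selects the configuration.
ds = load_dataset("path/to/my_dataset.py", "my_config_name", split="train")

# For plain local data files, the packaged builders (csv, json, parquet, ...)
# can be pointed at the files directly; no configuration name is required.
ds = load_dataset("csv", data_files={"train": "path/to/train.csv"}, split="train")
```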
| 2024-03-31T08:09:25Z | 6,721 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-07T13:58:40Z | https://api.github.com/repos/huggingface/datasets/issues/6721/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6721/timeline | Hi,do you know how to load the dataset from local file now? | https://api.github.com/repos/huggingface/datasets/issues/6721/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/50232044?v=4",
"events_url": "https://api.github.com/users/Gera001/events{/privacy}",
"followers_url": "https://api.github.com/users/Gera001/followers",
"following_url": "https://api.github.com/users/Gera001/following{/other_user}",
"gists_url": "https://api.github.com/users/Gera001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gera001",
"id": 50232044,
"login": "Gera001",
"node_id": "MDQ6VXNlcjUwMjMyMDQ0",
"organizations_url": "https://api.github.com/users/Gera001/orgs",
"received_events_url": "https://api.github.com/users/Gera001/received_events",
"repos_url": "https://api.github.com/users/Gera001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gera001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gera001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gera001"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Bk5DC | [
"\r\n@Gera001\r\n# Loading Dataset from Local Files Using 🤗Hugging Face.\r\n\r\nTo load a dataset from local files using the Hugging Face datasets library, you can use the `load_dataset` function.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={'train': 'path/to/train.csv',\r\n 'test': 'path/to/test.csv'})\r\n```\r\n\r\nReference to [HF Datasets docs for loading from local](https://huggingface.co/docs/datasets/en/loading#csv). \r\n\r\n@albertvillanova\r\nthis issue can be closed here.",
"like this: from datasets import load_from_disk\r\ndataset = load_from_disk(data_path)\r\n",
"@ge00009 \r\n> like this: from datasets import load_from_disk dataset = load_from_disk(data_path)\r\n\r\nLoads a dataset that was previously saved using `save_to_disk()`.\r\n\r\nReference link:\r\nhttps://huggingface.co/docs/datasets/en/package_reference/loading_methods#datasets.load_from_disk.example"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6721/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6721 | https://github.com/huggingface/datasets/issues/6721 | false |
2,173,603,459 | https://api.github.com/repos/huggingface/datasets/issues/6720/labels{/name} | ### Describe the bug
I am trying to get the HPLT datasets on the Hub. Downloading/re-uploading would be too time- and resource-consuming, so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close, but for some reason I always get the error below. It happens during the clean-up phase, where the directory cannot be removed because it is not empty.

My only guess is that this may have to do with zstandard.
```
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single
writer.write(example, key)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir
yield tmp_dir
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module>
ds = load_dataset(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete'
```
Interestingly, though, this directory _does_ appear to be empty:
```shell
> cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
> ls -lah
total 0
drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 .
drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 ..
> cd ..
> ls
7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt_mono_v1_2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
No error.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
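
For context, a minimal sketch of the kind of feature definition that leads to this `TypeError`, next to the corrected form suggested in the comments (the column name is illustrative):

```python
from datasets import Features, Sequence, Value

# A plain string inside Sequence is not a feature type; building it does not fail,
# but the schema computation later tries to call the string, which raises
# TypeError: 'str' object is not callable while writing examples.
broken = Features({"embedding": Sequence("float32")})

# Wrapping the dtype in Value() gives Sequence an actual feature type.
fixed = Features({"embedding": Sequence(Value("float32"))})
```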
| 2024-03-08T07:34:53Z | 6,720 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-07T11:07:09Z | https://api.github.com/repos/huggingface/datasets/issues/6720/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6720/timeline | TypeError: 'str' object is not callable | https://api.github.com/repos/huggingface/datasets/issues/6720/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | null | completed | CONTRIBUTOR | 2024-03-07T15:13:58Z | null | I_kwDODunzps6Bjo6D | [
"Hi ! I opened a PR to fix an issue in the Features defined in your code\r\n\r\nBasically changing\r\n```python\r\nSequence(\"float32\")\r\n```\r\n\r\nto\r\n```python\r\nSequence(Value(\"float32\"))\r\n```\r\n\r\n\r\nhttps://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/discussions/1",
"D'oh! Was wondering why the `str() is not callable` was in there. Glad the error is my end though, and not related to zstandard (which I had not used in the past).\r\n\r\nThanks a lot!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6720/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6720 | https://github.com/huggingface/datasets/issues/6720 | false |
2,169,585,727 | https://api.github.com/repos/huggingface/datasets/issues/6719/labels{/name} | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node`, but using the `IterableDatasetShard` in `accelerate` and `transformers` is very slow. When I filter after applying `split_dataset_by_node`, the resulting shards are of unequal size because a different number of samples gets filtered out of each one.

The distributed process hangs when trying to accomplish this. Is there any way to resolve this, or is it impossible to implement?
### Steps to reproduce the bug
Here is a toy example of what I am trying to do that reproduces the behavior
```
# torchrun --nproc-per-node 2 file.py
import os
import pandas as pd
import torch
from accelerate import Accelerator
from datasets import Features, Value, load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
accelerator = Accelerator(device_placement=True, dispatch_batches=False)
if accelerator.is_main_process:
if not os.path.exists("scratch_data"):
os.mkdir("scratch_data")
n_shards = 4
for i in range(n_shards):
df = pd.DataFrame({"id": list(range(10 * i, 10 * (i + 1)))})
df.to_parquet(f"scratch_data/shard_{i}.parquet")
world_size = accelerator.num_processes
local_rank = accelerator.process_index
def collate_fn(examples):
input_ids = []
for example in examples:
input_ids.append(example["id"])
return torch.LongTensor(input_ids)
dataset = load_dataset(
"parquet", data_dir="scratch_data", split="train", streaming=True
)
dataset = (
split_dataset_by_node(dataset, rank=local_rank, world_size=world_size)
.filter(lambda x: x["id"] < 35)
.shuffle(seed=42, buffer_size=100)
)
batch_size = 2
train_dataloader = DataLoader(
dataset,
batch_size=batch_size,
collate_fn=collate_fn,
num_workers=2
)
for x in train_dataloader:
x = x.to(accelerator.device)
print({"rank": local_rank, "id": x})
y = accelerator.gather_for_metrics(x)
if accelerator.is_main_process:
print("gathered", y)
```
### Expected behavior
Is there any way to continue training/inference on the GPUs that still have data left without waiting for the others? Or is it impossible to filter when using `split_dataset_by_node`?
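
A rough sketch of one possible direction (untested here, and assuming the default process group is already initialized): make every rank agree on when to stop, so that no rank blocks in a collective while another rank has already run out of data.

```python
import torch
import torch.distributed as dist

def ranks_synchronized_batches(dataloader, device):
    """Yield batches until ANY rank is exhausted, keeping collectives aligned."""
    it = iter(dataloader)
    while True:
        try:
            batch = next(it)
            exhausted = torch.tensor(0, device=device)
        except StopIteration:
            batch, exhausted = None, torch.tensor(1, device=device)
        # If any rank ran out of data, every rank stops (its last batch is dropped).
        dist.all_reduce(exhausted, op=dist.ReduceOp.MAX)
        if exhausted.item() > 0:
            break
        yield batch
```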
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.6.0 | 2024-03-05T15:55:13Z | 6,719 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-05T15:55:13Z | https://api.github.com/repos/huggingface/datasets/issues/6719/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6719/timeline | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | https://api.github.com/repos/huggingface/datasets/issues/6719/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4",
"events_url": "https://api.github.com/users/ssharpe42/events{/privacy}",
"followers_url": "https://api.github.com/users/ssharpe42/followers",
"following_url": "https://api.github.com/users/ssharpe42/following{/other_user}",
"gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ssharpe42",
"id": 8136905,
"login": "ssharpe42",
"node_id": "MDQ6VXNlcjgxMzY5MDU=",
"organizations_url": "https://api.github.com/users/ssharpe42/orgs",
"received_events_url": "https://api.github.com/users/ssharpe42/received_events",
"repos_url": "https://api.github.com/users/ssharpe42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ssharpe42"
} | [] | null | null | NONE | null | null | I_kwDODunzps6BUUA_ | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6719/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6719 | https://github.com/huggingface/datasets/issues/6719 | false |
2,169,468,488 | https://api.github.com/repos/huggingface/datasets/issues/6718/labels{/name} | I added `lock_importable_file` in `get_dataset_builder_class` and `extend_dataset_builder_for_streaming` to fix the issue, and I also added a test.
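
A rough sketch of the kind of concurrent access this guards against (the dataset name and worker count are illustrative, not the actual test added in this PR):

```python
from concurrent.futures import ThreadPoolExecutor

from datasets import load_dataset

def load(_):
    # force_redownload makes each caller re-fetch the loading script,
    # which is where concurrent loads could previously race.
    return load_dataset("squad", split="train", download_mode="force_redownload")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load, range(4)))
```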
cc @clefourrier | 2024-03-07T14:05:53Z | 6,718 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-05T15:04:20Z | https://api.github.com/repos/huggingface/datasets/issues/6718/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6718/timeline | Fix concurrent script loading with force_redownload | https://api.github.com/repos/huggingface/datasets/issues/6718/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-07T13:58:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6718",
"merged_at": "2024-03-07T13:58:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6718"
} | PR_kwDODunzps5ouwwE | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6718). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005074 / 0.011353 (-0.006279) | 0.003505 / 0.011008 (-0.007503) | 0.063683 / 0.038508 (0.025175) | 0.029308 / 0.023109 (0.006199) | 0.246648 / 0.275898 (-0.029250) | 0.265546 / 0.323480 (-0.057933) | 0.004108 / 0.007986 (-0.003878) | 0.002683 / 0.004328 (-0.001646) | 0.048634 / 0.004250 (0.044383) | 0.043786 / 0.037052 (0.006733) | 0.262197 / 0.258489 (0.003708) | 0.291582 / 0.293841 (-0.002259) | 0.027472 / 0.128546 (-0.101074) | 0.010213 / 0.075646 (-0.065434) | 0.206744 / 0.419271 (-0.212527) | 0.036195 / 0.043533 (-0.007337) | 0.249090 / 0.255139 (-0.006049) | 0.280002 / 0.283200 (-0.003198) | 0.018568 / 0.141683 (-0.123115) | 1.124844 / 1.452155 (-0.327311) | 1.159358 / 1.492716 (-0.333359) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093186 / 0.018006 (0.075180) | 0.302331 / 0.000490 (0.301842) | 0.000217 / 0.000200 (0.000017) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018727 / 0.037411 (-0.018684) | 0.061730 / 0.014526 (0.047204) | 0.074330 / 0.176557 (-0.102226) | 0.119769 / 0.737135 (-0.617366) | 0.075611 / 0.296338 (-0.220727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285063 / 0.215209 (0.069854) | 2.824809 / 2.077655 (0.747155) | 1.481858 / 1.504120 (-0.022262) | 1.350193 / 1.541195 (-0.191002) | 1.358012 / 
1.468490 (-0.110478) | 0.557842 / 4.584777 (-4.026935) | 2.380729 / 3.745712 (-1.364983) | 2.798891 / 5.269862 (-2.470970) | 1.719288 / 4.565676 (-2.846388) | 0.061705 / 0.424275 (-0.362570) | 0.005431 / 0.007607 (-0.002176) | 0.343233 / 0.226044 (0.117189) | 3.375223 / 2.268929 (1.106295) | 1.838188 / 55.444624 (-53.606436) | 1.570015 / 6.876477 (-5.306461) | 1.573157 / 2.142072 (-0.568915) | 0.650678 / 4.805227 (-4.154549) | 0.116412 / 6.500664 (-6.384252) | 0.041754 / 0.075469 (-0.033715) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970431 / 1.841788 (-0.871357) | 11.317128 / 8.074308 (3.242819) | 9.691240 / 10.191392 (-0.500152) | 0.142260 / 0.680424 (-0.538164) | 0.014131 / 0.534201 (-0.520070) | 0.289910 / 0.579283 (-0.289373) | 0.265648 / 0.434364 (-0.168715) | 0.323130 / 0.540337 (-0.217208) | 0.447005 / 1.386936 (-0.939931) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006031) | 0.003755 / 0.011008 (-0.007253) | 0.049646 / 0.038508 (0.011138) | 0.029669 / 0.023109 (0.006560) | 0.284151 / 0.275898 (0.008253) | 0.298351 / 0.323480 (-0.025128) | 0.004183 / 0.007986 (-0.003803) | 0.002683 / 0.004328 (-0.001645) | 0.048814 / 0.004250 (0.044563) | 0.045017 / 0.037052 (0.007965) | 0.287358 / 0.258489 (0.028869) | 0.317394 / 0.293841 (0.023553) | 0.030025 / 0.128546 (-0.098521) | 0.010854 / 0.075646 (-0.064793) | 0.058694 / 0.419271 (-0.360578) | 0.052287 / 0.043533 (0.008754) | 0.279038 / 0.255139 (0.023899) | 0.295442 / 0.283200 (0.012242) | 0.019413 / 0.141683 (-0.122270) | 1.146106 / 1.452155 (-0.306048) | 1.197777 / 1.492716 (-0.294939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092191 / 0.018006 (0.074184) | 0.302672 / 0.000490 (0.302182) | 0.000623 / 0.000200 (0.000423) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022067 / 0.037411 (-0.015345) | 0.081760 / 0.014526 (0.067235) | 0.087548 / 0.176557 (-0.089009) | 0.126405 / 0.737135 (-0.610730) | 0.089331 / 0.296338 (-0.207008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295821 / 0.215209 (0.080612) | 2.897930 / 2.077655 (0.820276) | 1.604500 / 1.504120 (0.100380) | 1.471502 / 1.541195 (-0.069692) | 1.497918 / 1.468490 (0.029428) | 0.576179 / 4.584777 (-4.008598) | 2.452103 / 3.745712 (-1.293609) | 2.668043 / 5.269862 (-2.601818) | 1.753544 / 4.565676 (-2.812133) | 0.064410 / 0.424275 (-0.359865) | 0.005027 / 0.007607 (-0.002580) | 0.351509 / 0.226044 (0.125465) | 3.479208 / 2.268929 (1.210280) | 1.990356 / 55.444624 (-53.454269) | 1.684920 / 6.876477 (-5.191556) | 1.794251 / 2.142072 (-0.347821) | 0.662692 / 4.805227 (-4.142535) | 0.118589 / 6.500664 (-6.382076) | 0.040813 / 0.075469 (-0.034656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002390 / 1.841788 (-0.839398) | 12.004617 / 8.074308 (3.930309) | 10.216005 / 10.191392 (0.024613) | 0.154354 / 0.680424 (-0.526070) | 0.015554 / 0.534201 (-0.518647) | 0.288741 / 0.579283 (-0.290542) | 0.276774 / 0.434364 (-0.157590) | 0.327055 / 0.540337 (-0.213282) | 0.435121 / 1.386936 (-0.951815) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f45bc6caa25115a04c41b278671a5a89457eb66c \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6718/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6718 | https://github.com/huggingface/datasets/pull/6718 | true |
2,168,726,432 | https://api.github.com/repos/huggingface/datasets/issues/6717/labels{/name} | ### Describe the bug
When loading an HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated.
### Steps to reproduce the bug
Minimal error code:
```python
from datasets import load_dataset
dataset_name = "zinc75/Vibravox_dummy"
config_name = "BWE_Larynx_microphone"
# if we use "ASR_Larynx_microphone" subset which is a monochannel audio, no error is thrown.
dataset = load_dataset(
path=dataset_name, name=config_name, split="train", streaming=True
)
dataset = dataset.remove_columns(["sensor_id"])
# dataset = dataset.map(lambda x:x, remove_columns=["sensor_id"])
# The commented version does not produce an error, but loses the dataset features.
sample = next(iter(dataset))
```
Error:
```
Traceback (most recent call last):
File "/home/julien/Bureau/github/vibravox/tmp.py", line 15, in <module>
sample = next(iter(dataset))
^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1392, in __iter__
example = _apply_feature_types_on_example(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1080, in _apply_feature_types_on_example
encoded_example = features.encode_example(example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1889, in encode_example
return encode_nested_example(self, example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in encode_nested_example
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in <dictcomp>
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1300, in encode_nested_example
return schema.encode_example(obj) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/audio.py", line 98, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 343, in write
with SoundFile(file, 'w', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7fd795d24680>: Format not recognised.
Process finished with exit code 1
```
### Expected behavior
I would expect this code to run without error.
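
In the meantime, a workaround sketch based on `select_columns`, which (as noted in the comments) does not trigger the error and keeps the dataset features:

```python
from datasets import load_dataset

dataset_name = "zinc75/Vibravox_dummy"
config_name = "BWE_Larynx_microphone"

dataset = load_dataset(path=dataset_name, name=config_name, split="train", streaming=True)
# Keep only the needed column(s) instead of dropping "sensor_id".
dataset = dataset.select_columns(["audio"])  # column name taken from the comments
sample = next(iter(dataset))
```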
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.11.0
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | 2024-03-05T10:32:19Z | 6,717 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-05T09:33:26Z | https://api.github.com/repos/huggingface/datasets/issues/6717/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6717/timeline | `remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio | https://api.github.com/repos/huggingface/datasets/issues/6717/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/53187038?v=4",
"events_url": "https://api.github.com/users/jhauret/events{/privacy}",
"followers_url": "https://api.github.com/users/jhauret/followers",
"following_url": "https://api.github.com/users/jhauret/following{/other_user}",
"gists_url": "https://api.github.com/users/jhauret/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jhauret",
"id": 53187038,
"login": "jhauret",
"node_id": "MDQ6VXNlcjUzMTg3MDM4",
"organizations_url": "https://api.github.com/users/jhauret/orgs",
"received_events_url": "https://api.github.com/users/jhauret/received_events",
"repos_url": "https://api.github.com/users/jhauret/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jhauret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhauret/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jhauret"
} | [] | null | null | NONE | null | null | I_kwDODunzps6BRCOg | [
"And it also works well with `dataset = dataset.select_columns([\"audio\"])`"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6717/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6717 | https://github.com/huggingface/datasets/issues/6717 | false |
2,168,706,558 | https://api.github.com/repos/huggingface/datasets/issues/6716/labels{/name} | ### Describe the bug
I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`:
```python
import datasets
for _ in range(100):
ds = datasets.load_dataset("rotten_tomatoes", split="train")
print(ds.builder_name) # prints out "rotten_tomatoes" sometimes instead of "parquet"
```
Output:
```
...
parquet
parquet
parquet
rotten_tomatoes
parquet
parquet
parquet
...
```
Here's a reproduction using GitHub Actions:
https://github.com/mlflow/mlflow/actions/runs/8153247984/job/22284263613?pr=11329#step:12:241
One of our tests is flaky because `builder_name` is not deterministic.
### Steps to reproduce the bug
1. Run the code above.
### Expected behavior
Always prints out `parquet`?
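
A sketch of a temporary workaround for the flaky assertion (not a fix for the underlying non-determinism): tolerate both observed values.

```python
import datasets

ds = datasets.load_dataset("rotten_tomatoes", split="train")
# Both values have been observed, depending on whether the auto-converted
# Parquet export or the original loading script is used under the hood.
assert ds.builder_name in {"parquet", "rotten_tomatoes"}
```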
### Environment info
```
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-1015-azure-x86_64-with-glibc2.34
- Python version: 3.8.18
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
``` | 2024-03-19T07:58:14Z | 6,716 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-05T09:23:21Z | https://api.github.com/repos/huggingface/datasets/issues/6716/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6716/timeline | Non-deterministic `Dataset.builder_name` value | https://api.github.com/repos/huggingface/datasets/issues/6716/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harupy",
"id": 17039389,
"login": "harupy",
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"repos_url": "https://api.github.com/users/harupy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harupy"
} | [] | null | completed | NONE | 2024-03-19T07:58:14Z | null | I_kwDODunzps6BQ9X- | [
"When `rotten_tomatoes` is printed out, the following warning message is also printed out:\r\n\r\n```\r\nYou can avoid this message in future by passing the argument `trust_remote_code=True`.\r\nPassing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.\r\n```",
"Hi ! This behavior happens because the dataset was originakky created using a dataset script [rotten_tomatoes.py](https://huggingface.co/datasets/rotten_tomatoes/blob/26f40d324d7b281d8b3fb1c47f30f8b9957f206b/rotten_tomatoes.py) and because we added features recently allowing to download the dataset directly from Parquet files (parquet builder) without running the dataset script (rotten_tomatoes). The flakiness must come from the availability of the Parquet files (we automatically export them in the refs/convert/parquet branch and we recently had to move some files).\r\n\r\nAnyway the easy fix on our side is to remove the dataset script completely, let me open a PR at https://huggingface.co/datasets/rotten_tomatoes\r\n\r\nEDIT: opened https://huggingface.co/datasets/rotten_tomatoes/discussions/6, feel free to comment there if you're ok with that change",
"@lhoestq Thanks for the comment, explanation, and patch!",
"> we automatically export them in the refs/convert/parquet branch\r\n\r\nWhen this operation is in progress, the parquet files become temporarily unavailable?",
"> When this operation is in progress, the parquet files become temporarily unavailable?\r\n\r\nYes correct. I just merged the patch btw :)",
"@lhoestq Thanks for merging the PR! I think this issue can be closed."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6716/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6716 | https://github.com/huggingface/datasets/issues/6716 | false |
2,167,747,095 | https://api.github.com/repos/huggingface/datasets/issues/6715/labels{/name} | A sliced and pickled ConcatenationTable could end up with a different schema than the original if the slice only contains blocks with a subset of the columns.
This can lead to issues when saving datasets built from a concatenation of datasets with mixed schemas.
Reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595 | 2024-03-05T11:23:05Z | 6,715 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-04T21:02:07Z | https://api.github.com/repos/huggingface/datasets/issues/6715/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6715/timeline | Fix sliced ConcatenationTable pickling with mixed schemas vertically | https://api.github.com/repos/huggingface/datasets/issues/6715/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-05T11:17:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6715",
"merged_at": "2024-03-05T11:17:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6715"
} | PR_kwDODunzps5oo36i | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005294 / 0.011353 (-0.006059) | 0.003598 / 0.011008 (-0.007411) | 0.062798 / 0.038508 (0.024290) | 0.027479 / 0.023109 (0.004370) | 0.247146 / 0.275898 (-0.028752) | 0.272103 / 0.323480 (-0.051377) | 0.002979 / 0.007986 (-0.005007) | 0.002701 / 0.004328 (-0.001628) | 0.049384 / 0.004250 (0.045134) | 0.041562 / 0.037052 (0.004510) | 0.269924 / 0.258489 (0.011435) | 0.290749 / 0.293841 (-0.003092) | 0.028285 / 0.128546 (-0.100261) | 0.010464 / 0.075646 (-0.065183) | 0.207000 / 0.419271 (-0.212272) | 0.036186 / 0.043533 (-0.007347) | 0.254524 / 0.255139 (-0.000615) | 0.274843 / 0.283200 (-0.008356) | 0.020044 / 0.141683 (-0.121638) | 1.119223 / 1.452155 (-0.332931) | 1.156557 / 1.492716 (-0.336159) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092014 / 0.018006 (0.074008) | 0.297349 / 0.000490 (0.296859) | 0.000205 / 0.000200 (0.000005) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018617 / 0.037411 (-0.018794) | 0.061879 / 0.014526 (0.047354) | 0.072877 / 0.176557 (-0.103680) | 0.121850 / 0.737135 (-0.615286) | 0.074686 / 0.296338 (-0.221653) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281204 / 0.215209 (0.065995) | 2.728688 / 2.077655 (0.651033) | 1.469659 / 1.504120 (-0.034461) | 1.355306 / 1.541195 (-0.185889) | 1.350598 / 
1.468490 (-0.117892) | 0.563669 / 4.584777 (-4.021108) | 2.377177 / 3.745712 (-1.368535) | 2.767402 / 5.269862 (-2.502460) | 1.720188 / 4.565676 (-2.845489) | 0.062594 / 0.424275 (-0.361681) | 0.005004 / 0.007607 (-0.002603) | 0.333017 / 0.226044 (0.106972) | 3.354543 / 2.268929 (1.085615) | 1.840031 / 55.444624 (-53.604593) | 1.545548 / 6.876477 (-5.330929) | 1.569858 / 2.142072 (-0.572214) | 0.642680 / 4.805227 (-4.162547) | 0.117463 / 6.500664 (-6.383201) | 0.042472 / 0.075469 (-0.032997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977436 / 1.841788 (-0.864351) | 11.285982 / 8.074308 (3.211673) | 9.441848 / 10.191392 (-0.749544) | 0.140773 / 0.680424 (-0.539650) | 0.013783 / 0.534201 (-0.520418) | 0.292304 / 0.579283 (-0.286979) | 0.275011 / 0.434364 (-0.159353) | 0.339094 / 0.540337 (-0.201244) | 0.447593 / 1.386936 (-0.939343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005258 / 0.011353 (-0.006095) | 0.003539 / 0.011008 (-0.007469) | 0.049920 / 0.038508 (0.011412) | 0.029789 / 0.023109 (0.006680) | 0.277187 / 0.275898 (0.001288) | 0.296817 / 0.323480 (-0.026663) | 0.004133 / 0.007986 (-0.003852) | 0.002679 / 0.004328 (-0.001649) | 0.048999 / 0.004250 (0.044749) | 0.044087 / 0.037052 (0.007034) | 0.290359 / 0.258489 (0.031870) | 0.319572 / 0.293841 (0.025731) | 0.030248 / 0.128546 (-0.098298) | 0.010453 / 0.075646 (-0.065194) | 0.058734 / 0.419271 (-0.360537) | 0.051216 / 0.043533 (0.007683) | 0.278667 / 0.255139 (0.023528) | 0.298792 / 0.283200 (0.015592) | 0.019131 / 0.141683 (-0.122552) | 1.131814 / 1.452155 (-0.320340) | 1.167208 / 1.492716 (-0.325508) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088316 / 0.018006 (0.070309) | 0.297143 / 0.000490 (0.296653) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022457 / 0.037411 (-0.014954) | 0.075251 / 0.014526 (0.060726) | 0.086747 / 0.176557 (-0.089809) | 0.124975 / 0.737135 (-0.612161) | 0.087320 / 0.296338 (-0.209019) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292339 / 0.215209 (0.077130) | 2.860196 / 2.077655 (0.782541) | 1.599058 / 1.504120 (0.094938) | 1.476104 / 1.541195 (-0.065091) | 1.509109 / 1.468490 (0.040619) | 0.564056 / 4.584777 (-4.020721) | 2.388870 / 3.745712 (-1.356842) | 2.582356 / 5.269862 (-2.687506) | 1.726033 / 4.565676 (-2.839644) | 0.061788 / 0.424275 (-0.362487) | 0.005021 / 0.007607 (-0.002586) | 0.345644 / 0.226044 (0.119600) | 3.384000 / 2.268929 (1.115071) | 1.946591 / 55.444624 (-53.498033) | 1.693485 / 6.876477 (-5.182992) | 1.790300 / 2.142072 (-0.351773) | 0.654637 / 4.805227 (-4.150590) | 0.116271 / 6.500664 (-6.384393) | 0.040710 / 0.075469 (-0.034759) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007367 / 1.841788 (-0.834421) | 11.868065 / 8.074308 (3.793757) | 10.146212 / 10.191392 (-0.045180) | 0.128902 / 0.680424 (-0.551522) | 0.015259 / 0.534201 (-0.518942) | 0.288087 / 0.579283 (-0.291196) | 0.281516 / 0.434364 (-0.152848) | 0.325755 / 0.540337 (-0.214583) | 0.424814 / 1.386936 (-0.962122) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8247202a7ed1c3164c88f8f183513c5f003aa2af \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6715/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6715 | https://github.com/huggingface/datasets/pull/6715 | true |
2,167,569,080 | https://api.github.com/repos/huggingface/datasets/issues/6714/labels{/name} | E.g., to have info about a dataset's number of examples for more informative TQDM bars. | 2024-03-04T20:28:30Z | 6,714 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-04T19:18:10Z | https://api.github.com/repos/huggingface/datasets/issues/6714/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6714/timeline | Expand no-code dataset info with datasets-server info | https://api.github.com/repos/huggingface/datasets/issues/6714/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-04T20:22:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6714",
"merged_at": "2024-03-04T20:22:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6714"
} | PR_kwDODunzps5ooQd2 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005237 / 0.011353 (-0.006116) | 0.003614 / 0.011008 (-0.007394) | 0.063349 / 0.038508 (0.024841) | 0.027297 / 0.023109 (0.004187) | 0.236203 / 0.275898 (-0.039695) | 0.260029 / 0.323480 (-0.063451) | 0.003096 / 0.007986 (-0.004889) | 0.003342 / 0.004328 (-0.000987) | 0.048703 / 0.004250 (0.044453) | 0.043121 / 0.037052 (0.006069) | 0.257491 / 0.258489 (-0.000998) | 0.282861 / 0.293841 (-0.010980) | 0.027701 / 0.128546 (-0.100845) | 0.010634 / 0.075646 (-0.065012) | 0.207369 / 0.419271 (-0.211903) | 0.035799 / 0.043533 (-0.007734) | 0.240445 / 0.255139 (-0.014694) | 0.261977 / 0.283200 (-0.021223) | 0.018175 / 0.141683 (-0.123508) | 1.143964 / 1.452155 (-0.308191) | 1.230057 / 1.492716 (-0.262659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096656 / 0.018006 (0.078650) | 0.303434 / 0.000490 (0.302944) | 0.000225 / 0.000200 (0.000025) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018454 / 0.037411 (-0.018957) | 0.061792 / 0.014526 (0.047266) | 0.073384 / 0.176557 (-0.103172) | 0.120148 / 0.737135 (-0.616988) | 0.074221 / 0.296338 (-0.222118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290291 / 0.215209 (0.075082) | 2.822908 / 2.077655 (0.745254) | 1.483139 / 1.504120 (-0.020981) | 1.349619 / 1.541195 (-0.191576) | 1.356588 / 
1.468490 (-0.111902) | 0.571723 / 4.584777 (-4.013054) | 2.402696 / 3.745712 (-1.343016) | 2.832215 / 5.269862 (-2.437647) | 1.794962 / 4.565676 (-2.770714) | 0.062707 / 0.424275 (-0.361568) | 0.004997 / 0.007607 (-0.002610) | 0.343093 / 0.226044 (0.117049) | 3.383028 / 2.268929 (1.114100) | 1.818624 / 55.444624 (-53.626000) | 1.549859 / 6.876477 (-5.326618) | 1.667838 / 2.142072 (-0.474235) | 0.648574 / 4.805227 (-4.156653) | 0.119181 / 6.500664 (-6.381484) | 0.042074 / 0.075469 (-0.033395) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982039 / 1.841788 (-0.859748) | 11.411759 / 8.074308 (3.337451) | 9.783405 / 10.191392 (-0.407987) | 0.129577 / 0.680424 (-0.550847) | 0.014091 / 0.534201 (-0.520110) | 0.297925 / 0.579283 (-0.281358) | 0.263884 / 0.434364 (-0.170480) | 0.346032 / 0.540337 (-0.194305) | 0.444806 / 1.386936 (-0.942130) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005527 / 0.011353 (-0.005826) | 0.003677 / 0.011008 (-0.007332) | 0.050245 / 0.038508 (0.011737) | 0.030070 / 0.023109 (0.006961) | 0.272640 / 0.275898 (-0.003258) | 0.296555 / 0.323480 (-0.026925) | 0.004247 / 0.007986 (-0.003738) | 0.003833 / 0.004328 (-0.000495) | 0.049341 / 0.004250 (0.045091) | 0.046604 / 0.037052 (0.009552) | 0.282765 / 0.258489 (0.024276) | 0.314924 / 0.293841 (0.021084) | 0.029749 / 0.128546 (-0.098797) | 0.010524 / 0.075646 (-0.065122) | 0.057859 / 0.419271 (-0.361412) | 0.053172 / 0.043533 (0.009640) | 0.274906 / 0.255139 (0.019767) | 0.290566 / 0.283200 (0.007366) | 0.019299 / 0.141683 (-0.122384) | 1.164092 / 1.452155 (-0.288062) | 1.205074 / 1.492716 (-0.287642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093943 / 0.018006 (0.075936) | 0.298746 / 0.000490 (0.298256) | 0.000232 / 0.000200 (0.000032) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022098 / 0.037411 (-0.015313) | 0.075523 / 0.014526 (0.060997) | 0.086784 / 0.176557 (-0.089773) | 0.124610 / 0.737135 (-0.612525) | 0.087743 / 0.296338 (-0.208595) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298555 / 0.215209 (0.083346) | 2.951493 / 2.077655 (0.873838) | 1.611448 / 1.504120 (0.107328) | 1.481503 / 1.541195 (-0.059692) | 1.497937 / 1.468490 (0.029447) | 0.580402 / 4.584777 (-4.004375) | 2.433308 / 3.745712 (-1.312404) | 2.712717 / 5.269862 (-2.557145) | 1.766286 / 4.565676 (-2.799391) | 0.063973 / 0.424275 (-0.360303) | 0.005006 / 0.007607 (-0.002601) | 0.354541 / 0.226044 (0.128497) | 3.486448 / 2.268929 (1.217519) | 1.972779 / 55.444624 (-53.471846) | 1.709018 / 6.876477 (-5.167458) | 1.864242 / 2.142072 (-0.277831) | 0.678213 / 4.805227 (-4.127014) | 0.119525 / 6.500664 (-6.381140) | 0.041387 / 0.075469 (-0.034082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021337 / 1.841788 (-0.820451) | 12.049563 / 8.074308 (3.975255) | 10.424701 / 10.191392 (0.233309) | 0.131444 / 0.680424 (-0.548980) | 0.015644 / 0.534201 (-0.518557) | 0.293712 / 0.579283 (-0.285571) | 0.279160 / 0.434364 (-0.155204) | 0.327991 / 0.540337 (-0.212346) | 0.435455 / 1.386936 (-0.951481) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1fe9483acc1ccaf19f3c199b99391921a8526215 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6714/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6714 | https://github.com/huggingface/datasets/pull/6714 | true |
2,166,797,560 | https://api.github.com/repos/huggingface/datasets/issues/6713/labels{/name} | This should fix the version compatibility issue when using `huggingface_hub` < 0.21.2 and latest fsspec (>=2023.12.0).
See my comment: https://github.com/huggingface/datasets/pull/6687#issuecomment-1976493336
>> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`
>
>Please note that people using `huggingface_hub` < 0.21.2 and latest `fsspec` will have issues when using `datasets`:
>- https://github.com/huggingface/lighteval/actions/runs/8139147047/job/22241658122?pr=86
>- https://github.com/huggingface/lighteval/pull/84
CC: @clefourrier
| 2024-03-04T18:14:03Z | 6,713 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-04T13:00:52Z | https://api.github.com/repos/huggingface/datasets/issues/6713/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6713/timeline | Bump huggingface-hub lower version to 0.21.2 | https://api.github.com/repos/huggingface/datasets/issues/6713/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-03-04T18:06:05Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6713",
"merged_at": "2024-03-04T18:06:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6713"
} | PR_kwDODunzps5olmqh | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq if you agree, I could make a patch release tomorrow morning.",
"sure :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005086 / 0.011353 (-0.006267) | 0.003695 / 0.011008 (-0.007313) | 0.063430 / 0.038508 (0.024922) | 0.026798 / 0.023109 (0.003689) | 0.253761 / 0.275898 (-0.022138) | 0.301301 / 0.323480 (-0.022179) | 0.004160 / 0.007986 (-0.003825) | 0.002783 / 0.004328 (-0.001545) | 0.050698 / 0.004250 (0.046448) | 0.040899 / 0.037052 (0.003846) | 0.269024 / 0.258489 (0.010535) | 0.323467 / 0.293841 (0.029626) | 0.027756 / 0.128546 (-0.100791) | 0.010684 / 0.075646 (-0.064963) | 0.207128 / 0.419271 (-0.212144) | 0.035874 / 0.043533 (-0.007659) | 0.251620 / 0.255139 (-0.003519) | 0.268668 / 0.283200 (-0.014532) | 0.017387 / 0.141683 (-0.124296) | 1.139230 / 1.452155 (-0.312925) | 1.183613 / 1.492716 (-0.309103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096337 / 0.018006 (0.078331) | 0.305014 / 0.000490 (0.304524) | 0.000219 / 0.000200 (0.000019) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018086 / 0.037411 (-0.019325) | 0.061626 / 0.014526 (0.047100) | 0.072598 / 0.176557 (-0.103959) | 0.119944 / 0.737135 (-0.617192) | 0.074549 / 0.296338 (-0.221789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282661 / 0.215209 (0.067452) | 2.804473 / 2.077655 (0.726818) | 1.444602 / 1.504120 (-0.059517) | 1.313977 / 1.541195 (-0.227217) | 1.319426 / 
1.468490 (-0.149064) | 0.570176 / 4.584777 (-4.014601) | 2.397895 / 3.745712 (-1.347818) | 2.760208 / 5.269862 (-2.509654) | 1.732457 / 4.565676 (-2.833220) | 0.062743 / 0.424275 (-0.361533) | 0.004950 / 0.007607 (-0.002657) | 0.338500 / 0.226044 (0.112456) | 3.287249 / 2.268929 (1.018320) | 1.777495 / 55.444624 (-53.667130) | 1.521255 / 6.876477 (-5.355222) | 1.517317 / 2.142072 (-0.624756) | 0.642202 / 4.805227 (-4.163025) | 0.116501 / 6.500664 (-6.384163) | 0.042418 / 0.075469 (-0.033052) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968966 / 1.841788 (-0.872822) | 11.490531 / 8.074308 (3.416223) | 9.507803 / 10.191392 (-0.683589) | 0.141570 / 0.680424 (-0.538854) | 0.014000 / 0.534201 (-0.520201) | 0.284237 / 0.579283 (-0.295046) | 0.269341 / 0.434364 (-0.165022) | 0.321654 / 0.540337 (-0.218683) | 0.446914 / 1.386936 (-0.940022) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005280 / 0.011353 (-0.006072) | 0.003794 / 0.011008 (-0.007214) | 0.050328 / 0.038508 (0.011820) | 0.029756 / 0.023109 (0.006647) | 0.273403 / 0.275898 (-0.002495) | 0.297346 / 0.323480 (-0.026133) | 0.004310 / 0.007986 (-0.003676) | 0.002858 / 0.004328 (-0.001470) | 0.048833 / 0.004250 (0.044583) | 0.045696 / 0.037052 (0.008644) | 0.291034 / 0.258489 (0.032545) | 0.318899 / 0.293841 (0.025058) | 0.029809 / 0.128546 (-0.098737) | 0.010710 / 0.075646 (-0.064936) | 0.058183 / 0.419271 (-0.361089) | 0.051761 / 0.043533 (0.008228) | 0.275022 / 0.255139 (0.019883) | 0.291614 / 0.283200 (0.008414) | 0.017975 / 0.141683 (-0.123708) | 1.148489 / 1.452155 (-0.303666) | 1.218111 / 1.492716 (-0.274605) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091806 / 0.018006 (0.073799) | 0.299413 / 0.000490 (0.298923) | 0.000219 / 0.000200 (0.000019) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021506 / 0.037411 (-0.015905) | 0.075537 / 0.014526 (0.061011) | 0.087020 / 0.176557 (-0.089536) | 0.125270 / 0.737135 (-0.611865) | 0.088038 / 0.296338 (-0.208300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300401 / 0.215209 (0.085192) | 2.932571 / 2.077655 (0.854916) | 1.609502 / 1.504120 (0.105383) | 1.480078 / 1.541195 (-0.061117) | 1.514902 / 1.468490 (0.046412) | 0.575591 / 4.584777 (-4.009186) | 2.461873 / 3.745712 (-1.283839) | 2.728099 / 5.269862 (-2.541762) | 1.760054 / 4.565676 (-2.805622) | 0.064371 / 0.424275 (-0.359904) | 0.004990 / 0.007607 (-0.002617) | 0.350134 / 0.226044 (0.124090) | 3.453249 / 2.268929 (1.184321) | 1.979760 / 55.444624 (-53.464865) | 1.741128 / 6.876477 (-5.135348) | 1.825734 / 2.142072 (-0.316339) | 0.654902 / 4.805227 (-4.150325) | 0.116989 / 6.500664 (-6.383676) | 0.040800 / 0.075469 (-0.034669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.033352 / 1.841788 (-0.808436) | 12.196711 / 8.074308 (4.122403) | 10.315114 / 10.191392 (0.123722) | 0.132541 / 0.680424 (-0.547882) | 0.016455 / 0.534201 (-0.517746) | 0.289025 / 0.579283 (-0.290258) | 0.281464 / 0.434364 (-0.152900) | 0.325302 / 0.540337 (-0.215036) | 0.428469 / 1.386936 (-0.958467) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7093b4b1a69f413e452119c87669af9e8ceaf749 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6713/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6713 | https://github.com/huggingface/datasets/pull/6713 | true |
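As a rough illustration of the dependency change described in the record above (the PR raises the minimum `huggingface-hub` version so that `datasets` can no longer be installed alongside hub releases that break with recent `fsspec`), a pin of this shape would express it; the exact file, variable name, and neighbouring entries in the repository may differ:

```python
# Illustrative sketch only -- the variable name and surrounding pins are
# assumptions, not the actual contents of datasets' setup.py.
REQUIRED_PKGS = [
    # huggingface_hub < 0.21.2 breaks with fsspec >= 2023.12.0,
    # so the lower bound is raised to 0.21.2 as the PR title states.
    "huggingface-hub>=0.21.2",
]
```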
2,166,588,373 | https://api.github.com/repos/huggingface/datasets/issues/6712/labels{/name} | reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595 | 2024-03-04T20:23:47Z | 6,712 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-04T11:14:18Z | https://api.github.com/repos/huggingface/datasets/issues/6712/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6712/timeline | fix CastError pickling | https://api.github.com/repos/huggingface/datasets/issues/6712/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-04T20:17:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6712.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6712",
"merged_at": "2024-03-04T20:17:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6712.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6712"
} | PR_kwDODunzps5ok4VF | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6712). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005151 / 0.011353 (-0.006202) | 0.003813 / 0.011008 (-0.007196) | 0.062957 / 0.038508 (0.024449) | 0.028282 / 0.023109 (0.005173) | 0.246036 / 0.275898 (-0.029862) | 0.290024 / 0.323480 (-0.033456) | 0.004009 / 0.007986 (-0.003977) | 0.002749 / 0.004328 (-0.001580) | 0.049351 / 0.004250 (0.045101) | 0.041143 / 0.037052 (0.004090) | 0.264782 / 0.258489 (0.006293) | 0.290711 / 0.293841 (-0.003130) | 0.027248 / 0.128546 (-0.101298) | 0.010691 / 0.075646 (-0.064955) | 0.205926 / 0.419271 (-0.213345) | 0.035652 / 0.043533 (-0.007880) | 0.246357 / 0.255139 (-0.008782) | 0.267851 / 0.283200 (-0.015348) | 0.018498 / 0.141683 (-0.123185) | 1.135996 / 1.452155 (-0.316159) | 1.181841 / 1.492716 (-0.310875) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094054 / 0.018006 (0.076048) | 0.305470 / 0.000490 (0.304980) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018842 / 0.037411 (-0.018569) | 0.061532 / 0.014526 (0.047006) | 0.073483 / 0.176557 (-0.103073) | 0.119426 / 0.737135 (-0.617709) | 0.075385 / 0.296338 (-0.220954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285544 / 0.215209 (0.070335) | 2.774256 / 2.077655 (0.696601) | 1.475719 / 1.504120 (-0.028401) | 1.353841 / 1.541195 (-0.187353) | 1.381891 / 
1.468490 (-0.086599) | 0.570619 / 4.584777 (-4.014158) | 2.380300 / 3.745712 (-1.365412) | 2.788767 / 5.269862 (-2.481095) | 1.741790 / 4.565676 (-2.823886) | 0.061810 / 0.424275 (-0.362465) | 0.005004 / 0.007607 (-0.002603) | 0.334963 / 0.226044 (0.108918) | 3.286388 / 2.268929 (1.017459) | 1.831669 / 55.444624 (-53.612955) | 1.523372 / 6.876477 (-5.353105) | 1.581551 / 2.142072 (-0.560521) | 0.639642 / 4.805227 (-4.165585) | 0.117356 / 6.500664 (-6.383308) | 0.043277 / 0.075469 (-0.032192) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973005 / 1.841788 (-0.868782) | 11.590148 / 8.074308 (3.515839) | 9.521262 / 10.191392 (-0.670130) | 0.143243 / 0.680424 (-0.537181) | 0.013529 / 0.534201 (-0.520672) | 0.285724 / 0.579283 (-0.293559) | 0.265642 / 0.434364 (-0.168721) | 0.366098 / 0.540337 (-0.174239) | 0.444410 / 1.386936 (-0.942526) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005347 / 0.011353 (-0.006006) | 0.003797 / 0.011008 (-0.007212) | 0.050441 / 0.038508 (0.011933) | 0.032812 / 0.023109 (0.009703) | 0.281278 / 0.275898 (0.005379) | 0.304524 / 0.323480 (-0.018956) | 0.005039 / 0.007986 (-0.002946) | 0.002735 / 0.004328 (-0.001594) | 0.049184 / 0.004250 (0.044933) | 0.046751 / 0.037052 (0.009698) | 0.292093 / 0.258489 (0.033604) | 0.322087 / 0.293841 (0.028246) | 0.029775 / 0.128546 (-0.098771) | 0.010540 / 0.075646 (-0.065106) | 0.057927 / 0.419271 (-0.361345) | 0.054240 / 0.043533 (0.010707) | 0.281537 / 0.255139 (0.026398) | 0.298386 / 0.283200 (0.015186) | 0.019773 / 0.141683 (-0.121910) | 1.157161 / 1.452155 (-0.294994) | 1.210395 / 1.492716 (-0.282321) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095098 / 0.018006 (0.077091) | 0.306952 / 0.000490 (0.306462) | 0.000211 / 0.000200 (0.000011) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022602 / 0.037411 (-0.014809) | 0.075242 / 0.014526 (0.060716) | 0.087134 / 0.176557 (-0.089422) | 0.127923 / 0.737135 (-0.609212) | 0.088645 / 0.296338 (-0.207693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304187 / 0.215209 (0.088978) | 2.977120 / 2.077655 (0.899465) | 1.663592 / 1.504120 (0.159473) | 1.527601 / 1.541195 (-0.013594) | 1.540121 / 1.468490 (0.071631) | 0.562492 / 4.584777 (-4.022285) | 2.473836 / 3.745712 (-1.271876) | 2.656782 / 5.269862 (-2.613080) | 1.754212 / 4.565676 (-2.811464) | 0.062330 / 0.424275 (-0.361945) | 0.005149 / 0.007607 (-0.002459) | 0.354905 / 0.226044 (0.128860) | 3.503587 / 2.268929 (1.234659) | 2.015682 / 55.444624 (-53.428943) | 1.744421 / 6.876477 (-5.132056) | 1.923120 / 2.142072 (-0.218952) | 0.652209 / 4.805227 (-4.153018) | 0.119406 / 6.500664 (-6.381258) | 0.042840 / 0.075469 (-0.032630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009164 / 1.841788 (-0.832624) | 12.379654 / 8.074308 (4.305346) | 10.408696 / 10.191392 (0.217304) | 0.141674 / 0.680424 (-0.538750) | 0.016815 / 0.534201 (-0.517386) | 0.292453 / 0.579283 (-0.286830) | 0.277577 / 0.434364 (-0.156787) | 0.325024 / 0.540337 (-0.215313) | 0.433181 / 1.386936 (-0.953755) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7a16a08c940e65397305aec5f1b484d91cee75a \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6712/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6712 | https://github.com/huggingface/datasets/pull/6712 | true |
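The record above only links to the report, but the underlying pattern is worth spelling out: an exception class whose constructor takes extra arguments fails to unpickle (which can hang multiprocessing workers that try to forward it) unless it defines how to rebuild itself. The sketch below shows that generic fix via `__reduce__`; the class and attribute names are hypothetical and are not the actual `CastError` signature in `datasets`.

```python
import pickle

# Hypothetical stand-in for a custom error with extra constructor arguments;
# not the real datasets CastError definition.
class MyCastError(Exception):
    def __init__(self, msg, table_column_names, requested_column_names):
        super().__init__(msg)
        self.table_column_names = table_column_names
        self.requested_column_names = requested_column_names

    def __reduce__(self):
        # Tell pickle how to reconstruct the exception: a callable plus its args.
        return (
            MyCastError,
            (self.args[0], self.table_column_names, self.requested_column_names),
        )

err = MyCastError("couldn't cast", ["a", "b"], ["a", "c"])
restored = pickle.loads(pickle.dumps(err))
assert restored.table_column_names == ["a", "b"]
```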
2,165,507,817 | https://api.github.com/repos/huggingface/datasets/issues/6711/labels{/name} | I was preparing some datasets for AI training and noticed that `datasets` by HuggingFace uses the conventional `open` mechanism to read the file and split it into chunks. I thought it can be significantly accelerated, and [started with a benchmark](https://gist.github.com/ashvardanian/55c2052e9f78b05b8d614aa90cb12347):
```sh
$ pip install --upgrade --force-reinstall datasets
$ python benchmark_huggingface_datasets.py xlsum.csv
Generating train split: 1004598 examples [00:47, 21116.16 examples/s]
Time taken to load the dataset: 48.66838526725769 seconds
Time taken to chunk the dataset into parts of size 10000: 0.11466407775878906 seconds
Total time taken: 48.78304934501648 seconds
```
For benchmarks, I used a [large CSV file with mixed UTF-8 content](https://github.com/ashvardanian/StringZilla/blob/main/CONTRIBUTING.md#benchmarking-datasets), the kind most common in modern large-scale pre-training pipelines. I later patched the `datasets` library to use `stringzilla`, which resulted in significantly lower memory consumption and a 2.9x throughput improvement on AWS `r7iz` instances. That's using slow SSDs mounted over the network; performance on local SSDs on something like a DGX-H100 should be even higher:
```sh
$ pip install -e .
$ python benchmark_huggingface_datasets.py xlsum.csv
Generating train split: 1004598 examples [00:15, 64529.90 examples/s]
Time taken to load the dataset: 16.45028805732727 seconds
Time taken to chunk the dataset into parts of size 10000: 0.1291060447692871 seconds
Total time taken: 16.579394102096558 seconds
```
I've already [pushed the patches to my fork](https://github.com/ashvardanian/datasets/tree/faster-text-parsers), and would love to contribute them to the upstream repository.
---
All the tests pass, but they leave a couple of important questions open. The default Python `open(..., newline=None)` uses universal newlines, where `\n`, `\r`, and `\r\n` are all converted to `\n` on the fly. I am not sure if it's a good idea for a general-purpose dataset preparation pipeline.
I can simulate the same behavior (which I don't yet do) for `"line"` splitter. Adjusting it for `"paragraph"`-splitter would be harder. Should we stick exactly to the old Pythonic behavior or stay closer to how C and other programming languages do that? | 2024-03-04T15:15:51Z | 6,711 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-03T19:03:04Z | https://api.github.com/repos/huggingface/datasets/issues/6711/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6711/timeline | 3x Faster Text Preprocessing | https://api.github.com/repos/huggingface/datasets/issues/6711/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1983160?v=4",
"events_url": "https://api.github.com/users/ashvardanian/events{/privacy}",
"followers_url": "https://api.github.com/users/ashvardanian/followers",
"following_url": "https://api.github.com/users/ashvardanian/following{/other_user}",
"gists_url": "https://api.github.com/users/ashvardanian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashvardanian",
"id": 1983160,
"login": "ashvardanian",
"node_id": "MDQ6VXNlcjE5ODMxNjA=",
"organizations_url": "https://api.github.com/users/ashvardanian/orgs",
"received_events_url": "https://api.github.com/users/ashvardanian/received_events",
"repos_url": "https://api.github.com/users/ashvardanian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashvardanian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashvardanian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashvardanian"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6711",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6711"
} | PR_kwDODunzps5ohM1a | [
"Unfortunately, that won't improve the performance. StringZilla repository has extensive benchmarks comparing against different built-in functionality of several programming languages. Using `re.finditer` for tokenization is practically the slowest anti-pattern I've encountered in any language. The gap between that and a SIMD-accelerated kernel can be as big as 10 MB/s vs 10 GB/s.\n\nI understand the need to keep the dependencies minimal. It helps the package remain small and portable. At this point, StringZilla provides 105 binaries for different OS and hardware versions (more portable than NumPy) and the [binary size generally ranges from 50 KB to 250 KB](https://pypi.org/project/stringzilla/), smaller than a single JPEG. \n",
"The `text` builder is not very popular, so I'm also not a fan of introducing a dependency for it.\r\n\r\nMoreover, I couldn't find any projects of this size/usage depending on StringZilla (with GitHub search), so we should at least wait for its greater adoption to merge this PR.\r\n"
] | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6711/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6711 | https://github.com/huggingface/datasets/pull/6711 | true |
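For context on the measurements quoted in the record above, a minimal version of that benchmark could look like the sketch below. It is not the author's gist: it assumes the `text` builder and a local file path passed on the command line, and simply times the load step and a fixed-size chunking step.

```python
import sys
import time

from datasets import load_dataset

path = sys.argv[1]        # e.g. a large UTF-8 text/CSV file, as in the report above
chunk_size = 10_000       # the chunk size mentioned in the benchmark output

start = time.time()
ds = load_dataset("text", data_files=path, split="train")
load_time = time.time() - start
print(f"Time taken to load the dataset: {load_time} seconds")

start = time.time()
chunks = [
    ds.select(range(i, min(i + chunk_size, len(ds))))
    for i in range(0, len(ds), chunk_size)
]
chunk_time = time.time() - start
print(f"Time taken to chunk the dataset into parts of size {chunk_size}: {chunk_time} seconds")
print(f"Total time taken: {load_time + chunk_time} seconds")
```

The universal-newlines question raised at the end of the record matters for a benchmark like this: a byte-oriented splitter that treats only `\n` as a line break can yield a different example count than Python's `open(..., newline=None)`.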
2,164,781,564 | https://api.github.com/repos/huggingface/datasets/issues/6710/labels{/name} | Use shared memory for the IterableDataset epoch.
This way calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well.
This is especially useful because the epoch is used to compute the `effective_seed` used for shuffling.
I used torch's shared memory in case users want to send dataset copies without shared memory using pickle. I also find it easier to use than `multiprocessing.shared_memory`, which requires unlinking only in the main process, or `mp.Value`, which is not picklable.
close https://github.com/huggingface/datasets/issues/6673
cc @rwightman | 2024-03-06T14:41:54Z | 6,710 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-02T12:08:50Z | https://api.github.com/repos/huggingface/datasets/issues/6710/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6710/timeline | Persist IterableDataset epoch in workers | https://api.github.com/repos/huggingface/datasets/issues/6710/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6710",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6710"
} | PR_kwDODunzps5oe4ov | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6710/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6710 | https://github.com/huggingface/datasets/pull/6710 | true |
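To make the mechanism described in the record above concrete, here is a minimal, self-contained sketch (not the actual `datasets` implementation) of how a torch tensor placed in shared memory lets the main process publish an epoch value that an already-running worker process can read:

```python
import torch
import torch.multiprocessing as mp

def worker(epoch, updated):
    updated.wait()  # wait until the parent has bumped the epoch
    print("epoch seen by worker:", int(epoch.item()))  # prints 1, not 0

if __name__ == "__main__":
    epoch = torch.zeros(1, dtype=torch.int64)
    epoch.share_memory_()  # place the tensor's storage in shared memory

    updated = mp.Event()
    p = mp.Process(target=worker, args=(epoch, updated))
    p.start()

    epoch += 1     # roughly what a set_epoch(1) call could do internally
    updated.set()  # let the worker read the new value
    p.join()
```

In the real feature, the shared value would feed into the shuffling seed (the `effective_seed` mentioned above), so DataLoader workers reshuffle consistently when the epoch changes.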
2,164,169,913 | https://api.github.com/repos/huggingface/datasets/issues/6709/labels{/name} | null | 2024-03-01T21:07:35Z | 6,709 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T21:01:14Z | https://api.github.com/repos/huggingface/datasets/issues/6709/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6709/timeline | set dev version | https://api.github.com/repos/huggingface/datasets/issues/6709/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-01T21:01:23Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6709.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6709",
"merged_at": "2024-03-01T21:01:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6709.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6709"
} | PR_kwDODunzps5oc2Fg | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6709). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005081 / 0.011353 (-0.006272) | 0.004182 / 0.011008 (-0.006826) | 0.063377 / 0.038508 (0.024869) | 0.027880 / 0.023109 (0.004770) | 0.247260 / 0.275898 (-0.028638) | 0.273580 / 0.323480 (-0.049900) | 0.002995 / 0.007986 (-0.004991) | 0.002804 / 0.004328 (-0.001524) | 0.049669 / 0.004250 (0.045418) | 0.042469 / 0.037052 (0.005417) | 0.268606 / 0.258489 (0.010117) | 0.292867 / 0.293841 (-0.000973) | 0.028077 / 0.128546 (-0.100469) | 0.011031 / 0.075646 (-0.064615) | 0.210225 / 0.419271 (-0.209047) | 0.035723 / 0.043533 (-0.007810) | 0.252131 / 0.255139 (-0.003008) | 0.272895 / 0.283200 (-0.010304) | 0.019809 / 0.141683 (-0.121874) | 1.138500 / 1.452155 (-0.313655) | 1.167752 / 1.492716 (-0.324964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094881 / 0.018006 (0.076875) | 0.300168 / 0.000490 (0.299678) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017917 / 0.037411 (-0.019494) | 0.061854 / 0.014526 (0.047328) | 0.074481 / 0.176557 (-0.102075) | 0.120075 / 0.737135 (-0.617061) | 0.074627 / 0.296338 (-0.221711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287888 / 0.215209 (0.072679) | 2.770165 / 2.077655 (0.692510) | 1.500071 / 1.504120 (-0.004049) | 1.374857 / 1.541195 (-0.166338) | 1.427291 / 
1.468490 (-0.041200) | 0.558431 / 4.584777 (-4.026346) | 2.439352 / 3.745712 (-1.306361) | 2.787471 / 5.269862 (-2.482391) | 1.742636 / 4.565676 (-2.823041) | 0.061716 / 0.424275 (-0.362559) | 0.004961 / 0.007607 (-0.002646) | 0.345209 / 0.226044 (0.119164) | 3.360253 / 2.268929 (1.091325) | 1.847945 / 55.444624 (-53.596680) | 1.595733 / 6.876477 (-5.280744) | 1.642350 / 2.142072 (-0.499723) | 0.638639 / 4.805227 (-4.166588) | 0.116918 / 6.500664 (-6.383746) | 0.042132 / 0.075469 (-0.033338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980602 / 1.841788 (-0.861185) | 11.545402 / 8.074308 (3.471094) | 9.452471 / 10.191392 (-0.738921) | 0.129930 / 0.680424 (-0.550494) | 0.014143 / 0.534201 (-0.520058) | 0.290302 / 0.579283 (-0.288981) | 0.263785 / 0.434364 (-0.170579) | 0.339580 / 0.540337 (-0.200758) | 0.450355 / 1.386936 (-0.936581) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005565 / 0.011353 (-0.005788) | 0.003764 / 0.011008 (-0.007244) | 0.050082 / 0.038508 (0.011574) | 0.030354 / 0.023109 (0.007245) | 0.250609 / 0.275898 (-0.025289) | 0.277200 / 0.323480 (-0.046280) | 0.004276 / 0.007986 (-0.003710) | 0.002805 / 0.004328 (-0.001523) | 0.048765 / 0.004250 (0.044514) | 0.045477 / 0.037052 (0.008425) | 0.267704 / 0.258489 (0.009215) | 0.303214 / 0.293841 (0.009373) | 0.029393 / 0.128546 (-0.099153) | 0.010623 / 0.075646 (-0.065023) | 0.058201 / 0.419271 (-0.361070) | 0.053131 / 0.043533 (0.009599) | 0.258682 / 0.255139 (0.003543) | 0.276069 / 0.283200 (-0.007131) | 0.018260 / 0.141683 (-0.123423) | 1.141542 / 1.452155 (-0.310613) | 1.185780 / 1.492716 (-0.306936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096857 / 0.018006 (0.078850) | 0.300656 / 0.000490 (0.300167) | 0.000450 / 0.000200 (0.000250) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022416 / 0.037411 (-0.014995) | 0.074781 / 0.014526 (0.060255) | 0.087299 / 0.176557 (-0.089257) | 0.127616 / 0.737135 (-0.609519) | 0.088382 / 0.296338 (-0.207957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298639 / 0.215209 (0.083430) | 2.940002 / 2.077655 (0.862347) | 1.709707 / 1.504120 (0.205587) | 1.556502 / 1.541195 (0.015307) | 1.592841 / 1.468490 (0.124351) | 0.570237 / 4.584777 (-4.014539) | 2.467576 / 3.745712 (-1.278137) | 2.741021 / 5.269862 (-2.528840) | 1.776526 / 4.565676 (-2.789151) | 0.063999 / 0.424275 (-0.360276) | 0.005068 / 0.007607 (-0.002539) | 0.360727 / 0.226044 (0.134682) | 3.535404 / 2.268929 (1.266476) | 2.035345 / 55.444624 (-53.409279) | 1.755916 / 6.876477 (-5.120561) | 1.889281 / 2.142072 (-0.252791) | 0.649025 / 4.805227 (-4.156202) | 0.118210 / 6.500664 (-6.382454) | 0.040815 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005650 / 1.841788 (-0.836138) | 12.228314 / 8.074308 (4.154006) | 10.147363 / 10.191392 (-0.044029) | 0.159258 / 0.680424 (-0.521166) | 0.015288 / 0.534201 (-0.518913) | 0.288144 / 0.579283 (-0.291139) | 0.281319 / 0.434364 (-0.153045) | 0.323380 / 0.540337 (-0.216958) | 0.426887 / 1.386936 (-0.960049) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b04ccb486f3831b4b0d2474119823efa3815709 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6709/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6709 | https://github.com/huggingface/datasets/pull/6709 | true |
2,164,158,579 | https://api.github.com/repos/huggingface/datasets/issues/6708/labels{/name} | null | 2024-03-01T21:03:01Z | 6,708 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T20:52:17Z | https://api.github.com/repos/huggingface/datasets/issues/6708/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6708/timeline | Release: 2.18.0 | https://api.github.com/repos/huggingface/datasets/issues/6708/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-01T20:56:50Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6708.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6708",
"merged_at": "2024-03-01T20:56:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6708.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6708"
} | PR_kwDODunzps5oczmi | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005442 / 0.011353 (-0.005910) | 0.003796 / 0.011008 (-0.007213) | 0.063663 / 0.038508 (0.025155) | 0.028901 / 0.023109 (0.005792) | 0.256742 / 0.275898 (-0.019156) | 0.279555 / 0.323480 (-0.043925) | 0.004128 / 0.007986 (-0.003858) | 0.002789 / 0.004328 (-0.001539) | 0.049463 / 0.004250 (0.045213) | 0.043461 / 0.037052 (0.006409) | 0.272975 / 0.258489 (0.014486) | 0.299057 / 0.293841 (0.005216) | 0.029030 / 0.128546 (-0.099516) | 0.010453 / 0.075646 (-0.065193) | 0.207611 / 0.419271 (-0.211660) | 0.037200 / 0.043533 (-0.006332) | 0.258327 / 0.255139 (0.003188) | 0.279746 / 0.283200 (-0.003454) | 0.018940 / 0.141683 (-0.122743) | 1.150379 / 1.452155 (-0.301776) | 1.217621 / 1.492716 (-0.275095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095115 / 0.018006 (0.077109) | 0.299393 / 0.000490 (0.298903) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018972 / 0.037411 (-0.018439) | 0.061669 / 0.014526 (0.047143) | 0.075605 / 0.176557 (-0.100951) | 0.125695 / 0.737135 (-0.611440) | 0.076654 / 0.296338 (-0.219684) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286431 / 0.215209 (0.071222) | 2.763554 / 2.077655 (0.685899) | 1.489902 / 1.504120 (-0.014218) | 1.375082 / 1.541195 (-0.166113) | 1.418903 / 
1.468490 (-0.049587) | 0.555646 / 4.584777 (-4.029131) | 2.410578 / 3.745712 (-1.335134) | 2.827453 / 5.269862 (-2.442408) | 1.764381 / 4.565676 (-2.801295) | 0.062937 / 0.424275 (-0.361339) | 0.004989 / 0.007607 (-0.002619) | 0.342115 / 0.226044 (0.116071) | 3.354660 / 2.268929 (1.085732) | 1.858418 / 55.444624 (-53.586206) | 1.586403 / 6.876477 (-5.290074) | 1.625762 / 2.142072 (-0.516311) | 0.643678 / 4.805227 (-4.161550) | 0.116764 / 6.500664 (-6.383900) | 0.042198 / 0.075469 (-0.033271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974953 / 1.841788 (-0.866835) | 11.748419 / 8.074308 (3.674111) | 9.753700 / 10.191392 (-0.437692) | 0.131330 / 0.680424 (-0.549094) | 0.018876 / 0.534201 (-0.515325) | 0.290078 / 0.579283 (-0.289205) | 0.264676 / 0.434364 (-0.169688) | 0.340285 / 0.540337 (-0.200052) | 0.445340 / 1.386936 (-0.941596) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005513 / 0.011353 (-0.005840) | 0.003665 / 0.011008 (-0.007344) | 0.049368 / 0.038508 (0.010860) | 0.032045 / 0.023109 (0.008936) | 0.280955 / 0.275898 (0.005057) | 0.299804 / 0.323480 (-0.023675) | 0.004391 / 0.007986 (-0.003594) | 0.002896 / 0.004328 (-0.001432) | 0.048914 / 0.004250 (0.044663) | 0.045448 / 0.037052 (0.008396) | 0.298779 / 0.258489 (0.040289) | 0.322012 / 0.293841 (0.028171) | 0.029449 / 0.128546 (-0.099097) | 0.010410 / 0.075646 (-0.065236) | 0.057867 / 0.419271 (-0.361405) | 0.053944 / 0.043533 (0.010411) | 0.278139 / 0.255139 (0.023000) | 0.297453 / 0.283200 (0.014254) | 0.018746 / 0.141683 (-0.122937) | 1.137890 / 1.452155 (-0.314264) | 1.206109 / 1.492716 (-0.286607) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091741 / 0.018006 (0.073735) | 0.300415 / 0.000490 (0.299925) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022097 / 0.037411 (-0.015314) | 0.076853 / 0.014526 (0.062327) | 0.088440 / 0.176557 (-0.088116) | 0.127176 / 0.737135 (-0.609959) | 0.088976 / 0.296338 (-0.207363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300754 / 0.215209 (0.085545) | 2.917713 / 2.077655 (0.840058) | 1.619338 / 1.504120 (0.115218) | 1.501543 / 1.541195 (-0.039652) | 1.506032 / 1.468490 (0.037542) | 0.579481 / 4.584777 (-4.005296) | 2.458917 / 3.745712 (-1.286795) | 2.754621 / 5.269862 (-2.515241) | 1.796440 / 4.565676 (-2.769237) | 0.067547 / 0.424275 (-0.356728) | 0.005001 / 0.007607 (-0.002606) | 0.351030 / 0.226044 (0.124985) | 3.466282 / 2.268929 (1.197353) | 1.954661 / 55.444624 (-53.489964) | 1.688737 / 6.876477 (-5.187740) | 1.836762 / 2.142072 (-0.305311) | 0.656441 / 4.805227 (-4.148786) | 0.118258 / 6.500664 (-6.382406) | 0.041608 / 0.075469 (-0.033861) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999696 / 1.841788 (-0.842092) | 12.383471 / 8.074308 (4.309162) | 10.338488 / 10.191392 (0.147096) | 0.150214 / 0.680424 (-0.530210) | 0.014997 / 0.534201 (-0.519204) | 0.288949 / 0.579283 (-0.290334) | 0.272012 / 0.434364 (-0.162352) | 0.327253 / 0.540337 (-0.213084) | 0.427594 / 1.386936 (-0.959342) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca8409a8bec4508255b9c3e808d0751eb1005260 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6708/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6708 | https://github.com/huggingface/datasets/pull/6708 | true |
2,163,799,868 | https://api.github.com/repos/huggingface/datasets/issues/6707/labels{/name} | null | 2024-03-01T17:32:14Z | 6,707 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T16:52:29Z | https://api.github.com/repos/huggingface/datasets/issues/6707/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6707/timeline | Silence ruff deprecation messages | https://api.github.com/repos/huggingface/datasets/issues/6707/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-01T17:25:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6707.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6707",
"merged_at": "2024-03-01T17:25:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6707.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6707"
} | PR_kwDODunzps5obkhA | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6707). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004728 / 0.011353 (-0.006624) | 0.002941 / 0.011008 (-0.008067) | 0.058270 / 0.038508 (0.019762) | 0.027418 / 0.023109 (0.004309) | 0.224993 / 0.275898 (-0.050905) | 0.243103 / 0.323480 (-0.080377) | 0.004668 / 0.007986 (-0.003318) | 0.002499 / 0.004328 (-0.001829) | 0.045020 / 0.004250 (0.040770) | 0.038006 / 0.037052 (0.000953) | 0.240807 / 0.258489 (-0.017682) | 0.264554 / 0.293841 (-0.029287) | 0.027018 / 0.128546 (-0.101529) | 0.009866 / 0.075646 (-0.065780) | 0.196578 / 0.419271 (-0.222694) | 0.034536 / 0.043533 (-0.008997) | 0.236535 / 0.255139 (-0.018604) | 0.248879 / 0.283200 (-0.034321) | 0.017140 / 0.141683 (-0.124543) | 1.046927 / 1.452155 (-0.405228) | 1.121209 / 1.492716 (-0.371507) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088267 / 0.018006 (0.070261) | 0.279774 / 0.000490 (0.279284) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.016521 / 0.037411 (-0.020890) | 0.056499 / 0.014526 (0.041974) | 0.067264 / 0.176557 (-0.109293) | 0.117270 / 0.737135 (-0.619865) | 0.069284 / 0.296338 (-0.227055) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.260679 / 0.215209 (0.045470) | 2.608971 / 2.077655 (0.531316) | 1.363139 / 1.504120 (-0.140981) | 1.262128 / 1.541195 (-0.279067) | 1.273619 / 
1.468490 (-0.194871) | 0.523417 / 4.584777 (-4.061360) | 2.291145 / 3.745712 (-1.454567) | 2.540603 / 5.269862 (-2.729258) | 1.599090 / 4.565676 (-2.966586) | 0.058170 / 0.424275 (-0.366105) | 0.004556 / 0.007607 (-0.003051) | 0.308361 / 0.226044 (0.082316) | 3.069269 / 2.268929 (0.800340) | 1.698064 / 55.444624 (-53.746560) | 1.426631 / 6.876477 (-5.449846) | 1.463913 / 2.142072 (-0.678160) | 0.595234 / 4.805227 (-4.209993) | 0.107202 / 6.500664 (-6.393462) | 0.038183 / 0.075469 (-0.037286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.905999 / 1.841788 (-0.935789) | 10.828492 / 8.074308 (2.754184) | 8.705635 / 10.191392 (-1.485757) | 0.121203 / 0.680424 (-0.559221) | 0.013789 / 0.534201 (-0.520412) | 0.268172 / 0.579283 (-0.311111) | 0.254277 / 0.434364 (-0.180086) | 0.310280 / 0.540337 (-0.230057) | 0.410490 / 1.386936 (-0.976446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004696 / 0.011353 (-0.006657) | 0.002982 / 0.011008 (-0.008026) | 0.045333 / 0.038508 (0.006825) | 0.027483 / 0.023109 (0.004374) | 0.253438 / 0.275898 (-0.022460) | 0.272657 / 0.323480 (-0.050823) | 0.004060 / 0.007986 (-0.003926) | 0.002574 / 0.004328 (-0.001754) | 0.045462 / 0.004250 (0.041212) | 0.041260 / 0.037052 (0.004208) | 0.267919 / 0.258489 (0.009430) | 0.290935 / 0.293841 (-0.002906) | 0.026674 / 0.128546 (-0.101873) | 0.009370 / 0.075646 (-0.066276) | 0.053543 / 0.419271 (-0.365729) | 0.047390 / 0.043533 (0.003857) | 0.255774 / 0.255139 (0.000635) | 0.273909 / 0.283200 (-0.009291) | 0.017252 / 0.141683 (-0.124431) | 1.064298 / 1.452155 (-0.387857) | 1.125374 / 1.492716 (-0.367342) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091506 / 0.018006 (0.073499) | 0.298570 / 0.000490 (0.298080) | 0.000741 / 0.000200 (0.000541) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020232 / 0.037411 (-0.017179) | 0.069640 / 0.014526 (0.055114) | 0.081360 / 0.176557 (-0.095197) | 0.116955 / 0.737135 (-0.620180) | 0.080920 / 0.296338 (-0.215418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283000 / 0.215209 (0.067791) | 2.802526 / 2.077655 (0.724871) | 1.534631 / 1.504120 (0.030511) | 1.407260 / 1.541195 (-0.133935) | 1.409111 / 1.468490 (-0.059379) | 0.534892 / 4.584777 (-4.049885) | 2.350516 / 3.745712 (-1.395196) | 2.550444 / 5.269862 (-2.719418) | 1.661747 / 4.565676 (-2.903930) | 0.060978 / 0.424275 (-0.363297) | 0.005300 / 0.007607 (-0.002308) | 0.367418 / 0.226044 (0.141373) | 3.338046 / 2.268929 (1.069117) | 1.883914 / 55.444624 (-53.560710) | 1.638561 / 6.876477 (-5.237916) | 1.751547 / 2.142072 (-0.390526) | 0.633318 / 4.805227 (-4.171910) | 0.114971 / 6.500664 (-6.385693) | 0.040202 / 0.075469 (-0.035267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962053 / 1.841788 (-0.879735) | 11.600643 / 8.074308 (3.526334) | 9.526461 / 10.191392 (-0.664931) | 0.123909 / 0.680424 (-0.556515) | 0.015944 / 0.534201 (-0.518257) | 0.271542 / 0.579283 (-0.307741) | 0.254366 / 0.434364 (-0.179998) | 0.300499 / 0.540337 (-0.239838) | 0.409122 / 1.386936 (-0.977814) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0acb27347a3c03efde612023235201a777e08e72 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6707/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6707 | https://github.com/huggingface/datasets/pull/6707 | true |
2,163,783,123 | https://api.github.com/repos/huggingface/datasets/issues/6706/labels{/name} | null | 2024-03-01T17:02:13Z | 6,706 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T16:44:58Z | https://api.github.com/repos/huggingface/datasets/issues/6706/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6706/timeline | Update ruff | https://api.github.com/repos/huggingface/datasets/issues/6706/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-01T16:52:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6706",
"merged_at": "2024-03-01T16:52:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6706"
} | PR_kwDODunzps5obgt- | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6706). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005014 / 0.011353 (-0.006339) | 0.003324 / 0.011008 (-0.007685) | 0.062501 / 0.038508 (0.023993) | 0.027633 / 0.023109 (0.004524) | 0.245693 / 0.275898 (-0.030205) | 0.271963 / 0.323480 (-0.051517) | 0.003062 / 0.007986 (-0.004923) | 0.002646 / 0.004328 (-0.001683) | 0.049020 / 0.004250 (0.044769) | 0.042381 / 0.037052 (0.005328) | 0.269729 / 0.258489 (0.011240) | 0.289052 / 0.293841 (-0.004789) | 0.027138 / 0.128546 (-0.101408) | 0.010246 / 0.075646 (-0.065400) | 0.205378 / 0.419271 (-0.213893) | 0.035792 / 0.043533 (-0.007741) | 0.247204 / 0.255139 (-0.007935) | 0.271805 / 0.283200 (-0.011394) | 0.019541 / 0.141683 (-0.122142) | 1.129335 / 1.452155 (-0.322820) | 1.174088 / 1.492716 (-0.318629) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091340 / 0.018006 (0.073334) | 0.300037 / 0.000490 (0.299547) | 0.000214 / 0.000200 (0.000014) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018360 / 0.037411 (-0.019051) | 0.061239 / 0.014526 (0.046713) | 0.072304 / 0.176557 (-0.104253) | 0.118883 / 0.737135 (-0.618253) | 0.073562 / 0.296338 (-0.222777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284478 / 0.215209 (0.069269) | 2.761819 / 2.077655 (0.684165) | 1.443757 / 1.504120 (-0.060363) | 1.315221 / 1.541195 (-0.225974) | 1.333930 / 
1.468490 (-0.134560) | 0.581470 / 4.584777 (-4.003307) | 2.422530 / 3.745712 (-1.323183) | 2.869898 / 5.269862 (-2.399963) | 1.789159 / 4.565676 (-2.776517) | 0.063708 / 0.424275 (-0.360567) | 0.004922 / 0.007607 (-0.002685) | 0.337352 / 0.226044 (0.111307) | 3.290192 / 2.268929 (1.021263) | 1.840192 / 55.444624 (-53.604432) | 1.543008 / 6.876477 (-5.333469) | 1.548947 / 2.142072 (-0.593125) | 0.655129 / 4.805227 (-4.150098) | 0.119010 / 6.500664 (-6.381654) | 0.042583 / 0.075469 (-0.032886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981333 / 1.841788 (-0.860455) | 11.349564 / 8.074308 (3.275256) | 9.397603 / 10.191392 (-0.793789) | 0.142151 / 0.680424 (-0.538273) | 0.013850 / 0.534201 (-0.520351) | 0.286323 / 0.579283 (-0.292960) | 0.265223 / 0.434364 (-0.169141) | 0.335322 / 0.540337 (-0.205015) | 0.441727 / 1.386936 (-0.945209) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005134 / 0.011353 (-0.006219) | 0.003216 / 0.011008 (-0.007792) | 0.049401 / 0.038508 (0.010893) | 0.031509 / 0.023109 (0.008400) | 0.262211 / 0.275898 (-0.013687) | 0.284814 / 0.323480 (-0.038665) | 0.004165 / 0.007986 (-0.003821) | 0.002693 / 0.004328 (-0.001636) | 0.048088 / 0.004250 (0.043838) | 0.043609 / 0.037052 (0.006557) | 0.271126 / 0.258489 (0.012637) | 0.301374 / 0.293841 (0.007533) | 0.028891 / 0.128546 (-0.099655) | 0.009911 / 0.075646 (-0.065735) | 0.057334 / 0.419271 (-0.361938) | 0.050936 / 0.043533 (0.007403) | 0.258883 / 0.255139 (0.003744) | 0.282884 / 0.283200 (-0.000315) | 0.017475 / 0.141683 (-0.124208) | 1.167562 / 1.452155 (-0.284593) | 1.214081 / 1.492716 (-0.278636) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096890 / 0.018006 (0.078884) | 0.315819 / 0.000490 (0.315329) | 0.000218 / 0.000200 (0.000018) | 0.000054 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021775 / 0.037411 (-0.015637) | 0.075816 / 0.014526 (0.061290) | 0.086992 / 0.176557 (-0.089564) | 0.125816 / 0.737135 (-0.611319) | 0.090343 / 0.296338 (-0.205995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295204 / 0.215209 (0.079995) | 2.903129 / 2.077655 (0.825475) | 1.629838 / 1.504120 (0.125718) | 1.531862 / 1.541195 (-0.009332) | 1.504614 / 1.468490 (0.036123) | 0.572910 / 4.584777 (-4.011867) | 2.482555 / 3.745712 (-1.263157) | 2.637259 / 5.269862 (-2.632603) | 1.733049 / 4.565676 (-2.832628) | 0.063239 / 0.424275 (-0.361036) | 0.005037 / 0.007607 (-0.002570) | 0.346657 / 0.226044 (0.120612) | 3.446469 / 2.268929 (1.177540) | 2.017864 / 55.444624 (-53.426761) | 1.688704 / 6.876477 (-5.187773) | 1.790813 / 2.142072 (-0.351259) | 0.660769 / 4.805227 (-4.144458) | 0.115582 / 6.500664 (-6.385082) | 0.040111 / 0.075469 (-0.035358) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.058089 / 1.841788 (-0.783699) | 11.998171 / 8.074308 (3.923863) | 10.459128 / 10.191392 (0.267736) | 0.149653 / 0.680424 (-0.530771) | 0.015015 / 0.534201 (-0.519186) | 0.289973 / 0.579283 (-0.289310) | 0.274217 / 0.434364 (-0.160147) | 0.351057 / 0.540337 (-0.189281) | 0.434295 / 1.386936 (-0.952641) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb33582eb2262cec5ec6f238b50a7d043ab3ca94 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6706/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6706 | https://github.com/huggingface/datasets/pull/6706 | true |
2,163,768,640 | https://api.github.com/repos/huggingface/datasets/issues/6705/labels{/name} | This code should not return empty data files
```python
from datasets import load_dataset_builder
revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd"
b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision)
print(b.config.data_files)
```
Previously it would return no data files because it would apply the YAML `data_files: data/**/train-*` pattern to this directory
cc @anton-l | 2024-03-01T18:59:06Z | 6,705 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T16:38:53Z | https://api.github.com/repos/huggingface/datasets/issues/6705/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6705/timeline | Fix data_files when passing data_dir | https://api.github.com/repos/huggingface/datasets/issues/6705/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-01T18:52:49Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6705.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6705",
"merged_at": "2024-03-01T18:52:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6705.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6705"
} | PR_kwDODunzps5obdbY | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6705). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005014 / 0.011353 (-0.006339) | 0.003371 / 0.011008 (-0.007637) | 0.063622 / 0.038508 (0.025114) | 0.026551 / 0.023109 (0.003442) | 0.244602 / 0.275898 (-0.031296) | 0.269981 / 0.323480 (-0.053499) | 0.003959 / 0.007986 (-0.004027) | 0.002678 / 0.004328 (-0.001650) | 0.049421 / 0.004250 (0.045170) | 0.039926 / 0.037052 (0.002873) | 0.256609 / 0.258489 (-0.001881) | 0.281934 / 0.293841 (-0.011907) | 0.027794 / 0.128546 (-0.100752) | 0.010130 / 0.075646 (-0.065516) | 0.207471 / 0.419271 (-0.211800) | 0.035423 / 0.043533 (-0.008110) | 0.246987 / 0.255139 (-0.008152) | 0.265413 / 0.283200 (-0.017787) | 0.018287 / 0.141683 (-0.123396) | 1.117550 / 1.452155 (-0.334604) | 1.151713 / 1.492716 (-0.341003) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095632 / 0.018006 (0.077626) | 0.304315 / 0.000490 (0.303825) | 0.000214 / 0.000200 (0.000014) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018591 / 0.037411 (-0.018820) | 0.062081 / 0.014526 (0.047555) | 0.075137 / 0.176557 (-0.101420) | 0.119116 / 0.737135 (-0.618020) | 0.075254 / 0.296338 (-0.221085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286161 / 0.215209 (0.070952) | 2.793824 / 2.077655 (0.716169) | 1.492523 / 1.504120 (-0.011597) | 1.372158 / 1.541195 (-0.169037) | 1.385921 / 
1.468490 (-0.082569) | 0.568700 / 4.584777 (-4.016077) | 2.340451 / 3.745712 (-1.405262) | 2.712022 / 5.269862 (-2.557840) | 1.712479 / 4.565676 (-2.853197) | 0.060906 / 0.424275 (-0.363369) | 0.004909 / 0.007607 (-0.002698) | 0.338227 / 0.226044 (0.112182) | 3.331329 / 2.268929 (1.062400) | 1.845646 / 55.444624 (-53.598978) | 1.559384 / 6.876477 (-5.317093) | 1.577683 / 2.142072 (-0.564390) | 0.629367 / 4.805227 (-4.175860) | 0.118645 / 6.500664 (-6.382019) | 0.041517 / 0.075469 (-0.033952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962237 / 1.841788 (-0.879551) | 11.232566 / 8.074308 (3.158258) | 9.627141 / 10.191392 (-0.564251) | 0.129732 / 0.680424 (-0.550692) | 0.013701 / 0.534201 (-0.520500) | 0.291869 / 0.579283 (-0.287414) | 0.269298 / 0.434364 (-0.165066) | 0.342502 / 0.540337 (-0.197835) | 0.455891 / 1.386936 (-0.931045) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005256 / 0.011353 (-0.006097) | 0.003419 / 0.011008 (-0.007589) | 0.049681 / 0.038508 (0.011173) | 0.029566 / 0.023109 (0.006457) | 0.268010 / 0.275898 (-0.007888) | 0.293721 / 0.323480 (-0.029759) | 0.004249 / 0.007986 (-0.003737) | 0.002643 / 0.004328 (-0.001685) | 0.048758 / 0.004250 (0.044508) | 0.044294 / 0.037052 (0.007241) | 0.279584 / 0.258489 (0.021095) | 0.311150 / 0.293841 (0.017309) | 0.029443 / 0.128546 (-0.099103) | 0.010314 / 0.075646 (-0.065333) | 0.057770 / 0.419271 (-0.361501) | 0.050953 / 0.043533 (0.007420) | 0.268283 / 0.255139 (0.013144) | 0.289155 / 0.283200 (0.005956) | 0.017742 / 0.141683 (-0.123941) | 1.163963 / 1.452155 (-0.288192) | 1.200580 / 1.492716 (-0.292136) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096365 / 0.018006 (0.078359) | 0.307257 / 0.000490 (0.306767) | 0.000265 / 0.000200 (0.000065) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021862 / 0.037411 (-0.015550) | 0.075502 / 0.014526 (0.060976) | 0.087800 / 0.176557 (-0.088756) | 0.125468 / 0.737135 (-0.611667) | 0.088207 / 0.296338 (-0.208132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.324184 / 0.215209 (0.108975) | 3.198442 / 2.077655 (1.120787) | 1.862801 / 1.504120 (0.358682) | 1.728637 / 1.541195 (0.187443) | 1.727997 / 1.468490 (0.259507) | 0.571590 / 4.584777 (-4.013187) | 2.448661 / 3.745712 (-1.297051) | 2.665943 / 5.269862 (-2.603919) | 1.731718 / 4.565676 (-2.833958) | 0.063644 / 0.424275 (-0.360631) | 0.004989 / 0.007607 (-0.002619) | 0.364543 / 0.226044 (0.138498) | 3.615859 / 2.268929 (1.346930) | 2.131637 / 55.444624 (-53.312987) | 1.857317 / 6.876477 (-5.019159) | 1.992813 / 2.142072 (-0.149260) | 0.654662 / 4.805227 (-4.150565) | 0.117631 / 6.500664 (-6.383034) | 0.040934 / 0.075469 (-0.034535) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.013802 / 1.841788 (-0.827985) | 11.899873 / 8.074308 (3.825565) | 10.291297 / 10.191392 (0.099905) | 0.155245 / 0.680424 (-0.525179) | 0.014449 / 0.534201 (-0.519752) | 0.286331 / 0.579283 (-0.292952) | 0.273111 / 0.434364 (-0.161253) | 0.321182 / 0.540337 (-0.219155) | 0.433406 / 1.386936 (-0.953530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e5406f9a9a453f2c0614c2ee26975e5973edc278 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6705/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6705 | https://github.com/huggingface/datasets/pull/6705 | true |
2,163,752,391 | https://api.github.com/repos/huggingface/datasets/issues/6704/labels{/name} | Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result.
Additionally, replace `get_fs_token_paths` with `url_to_fs` to avoid [unnecessary glob calls](https://github.com/fsspec/filesystem_spec/blob/14dce8ca78f7aa509a20edb263bff83a7760c24d/fsspec/core.py#L655-L656).
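As a rough sketch of that swap (illustrative only, with a hypothetical `hf://` pattern; not the actual code in this PR):
```python
from fsspec.core import get_fs_token_paths, url_to_fs

urlpath = "hf://datasets/username/dataset_name/data/*.parquet"  # hypothetical pattern

# before: also globs the pattern to build `paths`, which the resolution step doesn't need
fs, _, paths = get_fs_token_paths(urlpath)

# after: only resolves the filesystem and the normalized path, without the glob call
fs, path = url_to_fs(urlpath)
```
Since only the filesystem object is needed at this point, dropping the glob avoids an extra listing of the files.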
fix https://github.com/huggingface/datasets/issues/6259
fix https://github.com/huggingface/datasets/issues/6272 | 2024-03-15T15:31:23Z | 6,704 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-01T16:31:25Z | https://api.github.com/repos/huggingface/datasets/issues/6704/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6704/timeline | Improve default patterns resolution | https://api.github.com/repos/huggingface/datasets/issues/6704/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-15T15:22:03Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6704",
"merged_at": "2024-03-15T15:22:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6704"
} | PR_kwDODunzps5obZyj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6704). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Awesome !\r\n\r\nNote that it can still create duplicates if a path matches several dir patterns, e.g.\r\n\r\n```\r\ndata/train-train/data/txt\r\n```\r\nmatches two dir patterns:\r\n```\r\n**/{keyword}[{sep}]*/**\r\n**/*[{sep}]{keyword}/**\r\n```\r\n\r\nPS: feel free to update your branch, I just updated ruff on `main`",
"Yes, I didn't mention that case on purpose 🙂. One solution would be deprecating the `**/*[{sep}]{keyword}/**` pattern (and eventually removing it). This way, the directory patterns would align more with the filename ones. Or do you think this is too big of a breaking change?",
"I think it's too big of a breaking change yes :/ (and would make the docs / logic more complex for users to get imo) Though I think your approach is already a nice step in the right direction",
"These changes to the `resolve_pattern` function lead to 20-30x faster local file resolution in my benchmarks.",
"Nice ! Though since `fsspec` caches the filesystem, is there a risk when adding new files and reloading a dataset ?\r\n\r\n\r\n```python\r\nwith open(\"my/local/dir/0000.txt\", \"w\") as f:\r\n f.write(\"Hello there\")\r\nd1 = load_dataset(\"my/local/dir\")\r\nwith open(\"my/local/dir/0001.txt\", \"w\") as f:\r\n f.write(\"General Kenobi\")\r\nd2 = load_dataset(\"my/local/dir\")\r\nassert list(d1) != list(d2)\r\n```",
"Yes. But I think I have a solution for this.",
"I'm not satisfied with the context manager approach...\r\n\r\nA clean solution would require a bigger rewrite of the resolution logic (e.g., merging `get_data_patterns` and `DataFilesDict.from_patterns` into a `get_data_files` function that would build the `DataFilesDict` by matching the paths using `fs.find` and `fsspec.utils.glob_translate` (available in `fsspec>=2023.12.0`))\r\n\r\nThe current changes make the local resolution 2-3x faster, which is good enough for now, I think.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004888 / 0.011353 (-0.006465) | 0.003267 / 0.011008 (-0.007742) | 0.065117 / 0.038508 (0.026609) | 0.029416 / 0.023109 (0.006306) | 0.232021 / 0.275898 (-0.043877) | 0.258053 / 0.323480 (-0.065427) | 0.003971 / 0.007986 (-0.004014) | 0.002550 / 0.004328 (-0.001779) | 0.049126 / 0.004250 (0.044876) | 0.040620 / 0.037052 (0.003568) | 0.253437 / 0.258489 (-0.005052) | 0.273583 / 0.293841 (-0.020258) | 0.026775 / 0.128546 (-0.101771) | 0.010073 / 0.075646 (-0.065573) | 0.219089 / 0.419271 (-0.200183) | 0.035047 / 0.043533 (-0.008486) | 0.247661 / 0.255139 (-0.007478) | 0.258674 / 0.283200 (-0.024525) | 0.018428 / 0.141683 (-0.123255) | 1.130394 / 1.452155 (-0.321761) | 1.173167 / 1.492716 (-0.319549) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092581 / 0.018006 (0.074574) | 0.303657 / 0.000490 (0.303167) | 0.000215 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018640 / 0.037411 (-0.018771) | 0.062032 / 0.014526 (0.047506) | 0.073982 / 0.176557 (-0.102575) | 0.121499 / 0.737135 (-0.615636) | 0.076780 / 0.296338 (-0.219559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279411 / 0.215209 (0.064202) | 2.737977 / 2.077655 (0.660322) | 1.454135 / 1.504120 (-0.049985) | 1.343144 / 1.541195 (-0.198051) | 1.339876 / 
1.468490 (-0.128614) | 0.567306 / 4.584777 (-4.017471) | 2.372569 / 3.745712 (-1.373143) | 2.716810 / 5.269862 (-2.553052) | 1.697895 / 4.565676 (-2.867782) | 0.061804 / 0.424275 (-0.362471) | 0.004986 / 0.007607 (-0.002622) | 0.332721 / 0.226044 (0.106676) | 3.274572 / 2.268929 (1.005644) | 1.789900 / 55.444624 (-53.654725) | 1.536346 / 6.876477 (-5.340131) | 1.551940 / 2.142072 (-0.590132) | 0.634539 / 4.805227 (-4.170688) | 0.115860 / 6.500664 (-6.384805) | 0.041737 / 0.075469 (-0.033732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024469 / 1.841788 (-0.817319) | 11.327496 / 8.074308 (3.253188) | 9.265855 / 10.191392 (-0.925537) | 0.142200 / 0.680424 (-0.538224) | 0.013945 / 0.534201 (-0.520256) | 0.289670 / 0.579283 (-0.289614) | 0.269240 / 0.434364 (-0.165124) | 0.324748 / 0.540337 (-0.215590) | 0.421393 / 1.386936 (-0.965543) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005284 / 0.011353 (-0.006069) | 0.003351 / 0.011008 (-0.007658) | 0.049973 / 0.038508 (0.011465) | 0.030257 / 0.023109 (0.007148) | 0.273660 / 0.275898 (-0.002238) | 0.300328 / 0.323480 (-0.023152) | 0.004133 / 0.007986 (-0.003852) | 0.002614 / 0.004328 (-0.001715) | 0.048055 / 0.004250 (0.043804) | 0.044731 / 0.037052 (0.007678) | 0.290257 / 0.258489 (0.031768) | 0.321243 / 0.293841 (0.027402) | 0.029542 / 0.128546 (-0.099004) | 0.010074 / 0.075646 (-0.065573) | 0.057944 / 0.419271 (-0.361327) | 0.051267 / 0.043533 (0.007734) | 0.276278 / 0.255139 (0.021139) | 0.302464 / 0.283200 (0.019264) | 0.018231 / 0.141683 (-0.123452) | 1.140782 / 1.452155 (-0.311373) | 1.182991 / 1.492716 (-0.309725) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092325 / 0.018006 (0.074319) | 0.302610 / 0.000490 (0.302121) | 0.000202 / 0.000200 (0.000002) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021458 / 0.037411 (-0.015954) | 0.074883 / 0.014526 (0.060357) | 0.085747 / 0.176557 (-0.090809) | 0.125506 / 0.737135 (-0.611629) | 0.086921 / 0.296338 (-0.209417) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290485 / 0.215209 (0.075276) | 2.853898 / 2.077655 (0.776243) | 1.615606 / 1.504120 (0.111486) | 1.491797 / 1.541195 (-0.049397) | 1.515981 / 1.468490 (0.047491) | 0.566760 / 4.584777 (-4.018017) | 2.462593 / 3.745712 (-1.283119) | 2.765516 / 5.269862 (-2.504345) | 1.755078 / 4.565676 (-2.810598) | 0.063614 / 0.424275 (-0.360661) | 0.005040 / 0.007607 (-0.002567) | 0.347957 / 0.226044 (0.121912) | 3.464258 / 2.268929 (1.195330) | 1.992273 / 55.444624 (-53.452351) | 1.699147 / 6.876477 (-5.177330) | 1.868438 / 2.142072 (-0.273635) | 0.660756 / 4.805227 (-4.144471) | 0.118142 / 6.500664 (-6.382522) | 0.041974 / 0.075469 (-0.033495) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012206 / 1.841788 (-0.829581) | 12.343735 / 8.074308 (4.269427) | 10.321975 / 10.191392 (0.130583) | 0.140007 / 0.680424 (-0.540417) | 0.015755 / 0.534201 (-0.518446) | 0.291978 / 0.579283 (-0.287305) | 0.278792 / 0.434364 (-0.155572) | 0.325366 / 0.540337 (-0.214972) | 0.439403 / 1.386936 (-0.947533) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d1d3c06a651c6ad5142f331cb5dc0008ddcade33 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6704/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6704 | https://github.com/huggingface/datasets/pull/6704 | true |
2,163,250,590 | https://api.github.com/repos/huggingface/datasets/issues/6703/labels{/name} | ### Describe the bug
I get the following error message: "You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead."
### Steps to reproduce the bug
1. Save a dataset with `save_to_disk`
2. Try to load it with `load_dataset`
### Expected behavior
I am able to load the dataset again with `load_dataset`, which most packages use rather than `load_from_disk`. I want a workaround that lets me create the same indexing that `push_to_hub` creates for you before using `save_to_disk` - how can that be achieved?
### Environment info
datasets 2.17.1, python 3.10 | 2024-03-04T13:46:20Z | 6,703 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-01T11:59:56Z | https://api.github.com/repos/huggingface/datasets/issues/6703/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6703/timeline | Unable to load dataset that was saved with `save_to_disk` | https://api.github.com/repos/huggingface/datasets/issues/6703/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casper-hansen",
"id": 27340033,
"login": "casper-hansen",
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casper-hansen"
} | [] | null | completed | NONE | 2024-03-04T13:46:20Z | null | I_kwDODunzps6A8JWe | [
"`save_to_disk` uses a special serialization that can only be read using `load_from_disk`.\r\n\r\nContrary to `load_dataset`, `load_from_disk` directly loads Arrow files and uses the dataset directory as cache.\r\n\r\nOn the other hand `load_dataset` does a conversion step to get Arrow files from the raw data files (could be in JSON, CSV, Parquet etc.) and caches them in the `datasets` cache directory (default is `~/.cache/huggingface/datasets`). We haven't implemented any logic in `load_dataset` to support datasets saved with `save_to_disk` because they don't use the same cache.\r\n\r\nEDIT: note that you can save your dataset in Parquet format locally using `.to.parquet()` (make sure to shard in multiple files your dataset if it's multiple GBs - you can use `.shard()` + `.to_parquet()` to do that) and you'll be able to reload it using `load_dataset`",
"@lhoestq, so is it correctly understood that if I run `to_parquet()` and then `save_to_disk()`, I can load it with `load_dataset`? If yes, then it would resolve this issue (and should probably be documented somewhere 😄)",
"Here is an example:\r\n```python\r\nds.to_parquet(\"my/local/dir/data.parquet\")\r\n\r\n# later\r\nds = load_dataset(\"my/local/dir\")\r\n```\r\n\r\nand for bigger datasets:\r\n```python\r\nnum_shards = 1024 # set number of files to save (e.g. try to have files smaller than 5GB)\r\nfor shard_idx in num_shards:\r\n shard = ds.shard(index=shard_idx, num_shards=num_shards)\r\n shard.to_parquet(f\"my/local/dir/{shard_idx:05d}.parquet\") # 00000.parquet to 01023.parquet\r\n\r\n# later\r\nds = load_dataset(\"my/local/dir\")\r\n```\r\n\r\n\r\nI hope this helps :)",
"Thanks for helping out! Does this approach work with `s3fs`? e.g. something like this:\r\n\r\n```python\r\nimport s3fs\r\ns3 = s3fs.S3FileSystem(anon=True)\r\nwith s3.open('mybucket/new-file.parquet', 'w') as f:\r\n ds.to_parquet(f)\r\n```\r\n\r\nThis is instead of `save_to_disk` to save to an S3 bucket.\r\n\r\nOtherwise, I am not sure how to make this work when saving the dataset to an S3 bucket. Would `dataset.set_format(\"arrow\")` work as a replacement?",
"`load_dataset` does't support S3 buckets unfortunately :/",
"> `load_dataset` does't support S3 buckets unfortunately :/\r\n\r\nI am aware but I have some code that downloads it to disk before using that method. The most important part is to store it in a format that load_dataset is compatible with. ",
"Feel free to use Parquet then :)",
"I ended up with this. Not ideal to save to local disk, but it works and loads via `load_datasets` after downloading from S3 with another method.\r\n\r\n```python\r\nwith tempfile.TemporaryDirectory() as dir:\r\n dataset_nbytes = ds._estimate_nbytes()\r\n max_shard_size_local = convert_file_size_to_int(max_shard_size)\r\n num_shards = int(dataset_nbytes / max_shard_size_local) + 1\r\n\r\n for shard_idx in range(num_shards):\r\n shard = ds.shard(index=shard_idx, num_shards=num_shards)\r\n shard.to_parquet(f\"{dir}/{shard_idx:05d}.parquet\")\r\n \r\n fs.upload(\r\n lpath=dir,\r\n rpath=s3_path,\r\n recursive=True,\r\n )\r\n```"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6703/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6703 | https://github.com/huggingface/datasets/issues/6703 | false |
2,161,938,484 | https://api.github.com/repos/huggingface/datasets/issues/6702/labels{/name} | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote dataset
```
It would be great to have a way to push dataset_new to a remote dataset that respects the same schema. This way one would not have to do the following:
```
from datasets import load_dataset, concatenate_datasets
dataset = load_dataset('username/dataset_name', use_auth_token='your_hf_token_here')
updated_dataset = concatenate_datasets([dataset['train'], dataset_new])
updated_dataset.push_to_hub('username/dataset_name', use_auth_token='your_hf_token_here')
```
### Motivation
No need to download the dataset.
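A hedged sketch of how this can be approximated today, following the Parquet-upload suggestion in the comments further down (the repo id and shard name are hypothetical):
```python
from huggingface_hub import HfApi

# write the new samples to a Parquet shard and upload only that file,
# without downloading the existing dataset
dataset_new.to_parquet("0001.parquet")
HfApi().upload_file(
    path_or_fileobj="0001.parquet",
    path_in_repo="data/0001.parquet",
    repo_id="username/dataset_name",
    repo_type="dataset",
)
```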
### Your contribution
Maybe this feature already exists; I didn't see it, though. I do not have the expertise to do this. | 2024-02-29T19:17:12Z | https://api.github.com/repos/huggingface/datasets/issues/6702/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6702/timeline | Push samples to dataset on hub without having the dataset locally | https://api.github.com/repos/huggingface/datasets/issues/6702/events | null | {
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-29T19:17:12Z | https://api.github.com/repos/huggingface/datasets/issues/6702/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6702/timeline | Push samples to dataset on hub without having the dataset locally | https://api.github.com/repos/huggingface/datasets/issues/6702/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"events_url": "https://api.github.com/users/jbdel/events{/privacy}",
"followers_url": "https://api.github.com/users/jbdel/followers",
"following_url": "https://api.github.com/users/jbdel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbdel",
"id": 17854096,
"login": "jbdel",
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"organizations_url": "https://api.github.com/users/jbdel/orgs",
"received_events_url": "https://api.github.com/users/jbdel/received_events",
"repos_url": "https://api.github.com/users/jbdel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbdel"
} | [] | null | completed | NONE | 2024-03-08T21:08:38Z | null | I_kwDODunzps6A3JA0 | [
"Hi ! For now I would recommend creating a new Parquet file using `dataset_new.to_parquet()` and upload it to HF using `huggingface_hub` every time you get a new batch of data. You can name the Parquet files `0000.parquet`, `0001.parquet`, etc.\r\n\r\nThough maybe make sure to not upload one file per sample since that would be inefficient. You can buffer your data and upload when you have enough new samples for example",
"This is excellent, thanks!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6702/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6702 | https://github.com/huggingface/datasets/issues/6702 | false |
2,161,448,017 | https://api.github.com/repos/huggingface/datasets/issues/6701/labels{/name} | This allows to stream datasets like [Major-TOM/Core-S2L2A](https://huggingface.co/datasets/Major-TOM/Core-S2L2A) which have row groups with few rows (one row is ~10MB). Previously the cold start would take a lot of time and OOM because it would download many row groups before yielding the first example.
I tried on OpenOrca and imagenet-hard and it doesn't affect overall throughput.
Even if the overall throughput doesn't change for datasets like imagenet-hard with big rows, note that it does create shorter and more frequent pauses to download the next row group. Though I find it fine because previously the pauses were less frequent but very long (downloading multiple row groups at a time) | 2024-02-29T15:15:18Z | 6,701 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-29T14:53:01Z | https://api.github.com/repos/huggingface/datasets/issues/6701/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6701/timeline | Base parquet batch_size on parquet row group size | https://api.github.com/repos/huggingface/datasets/issues/6701/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-02-29T15:08:55Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6701.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6701",
"merged_at": "2024-02-29T15:08:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6701.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6701"
} | PR_kwDODunzps5oTfO_ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005490 / 0.011353 (-0.005863) | 0.003709 / 0.011008 (-0.007299) | 0.064192 / 0.038508 (0.025684) | 0.029581 / 0.023109 (0.006472) | 0.251086 / 0.275898 (-0.024812) | 0.267306 / 0.323480 (-0.056174) | 0.003074 / 0.007986 (-0.004912) | 0.003340 / 0.004328 (-0.000988) | 0.048820 / 0.004250 (0.044569) | 0.045370 / 0.037052 (0.008318) | 0.260384 / 0.258489 (0.001895) | 0.284558 / 0.293841 (-0.009283) | 0.027732 / 0.128546 (-0.100814) | 0.010661 / 0.075646 (-0.064986) | 0.213403 / 0.419271 (-0.205868) | 0.036283 / 0.043533 (-0.007250) | 0.250107 / 0.255139 (-0.005032) | 0.265220 / 0.283200 (-0.017980) | 0.021021 / 0.141683 (-0.120661) | 1.112058 / 1.452155 (-0.340096) | 1.169039 / 1.492716 (-0.323678) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095008 / 0.018006 (0.077002) | 0.303509 / 0.000490 (0.303019) | 0.000233 / 0.000200 (0.000033) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018224 / 0.037411 (-0.019187) | 0.061366 / 0.014526 (0.046841) | 0.073584 / 0.176557 (-0.102972) | 0.119869 / 0.737135 (-0.617266) | 0.074228 / 0.296338 (-0.222111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288147 / 0.215209 (0.072938) | 2.824419 / 2.077655 (0.746764) | 1.478530 / 1.504120 (-0.025590) | 1.350127 / 1.541195 (-0.191067) | 1.349622 / 
1.468490 (-0.118868) | 0.568058 / 4.584777 (-4.016719) | 2.377494 / 3.745712 (-1.368218) | 2.720767 / 5.269862 (-2.549094) | 1.710763 / 4.565676 (-2.854914) | 0.061498 / 0.424275 (-0.362778) | 0.004893 / 0.007607 (-0.002715) | 0.335633 / 0.226044 (0.109588) | 3.380646 / 2.268929 (1.111717) | 1.802436 / 55.444624 (-53.642188) | 1.562737 / 6.876477 (-5.313739) | 1.566267 / 2.142072 (-0.575806) | 0.629058 / 4.805227 (-4.176169) | 0.116307 / 6.500664 (-6.384357) | 0.042174 / 0.075469 (-0.033295) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950945 / 1.841788 (-0.890842) | 11.279009 / 8.074308 (3.204701) | 9.433251 / 10.191392 (-0.758141) | 0.138964 / 0.680424 (-0.541460) | 0.014155 / 0.534201 (-0.520046) | 0.284065 / 0.579283 (-0.295218) | 0.263301 / 0.434364 (-0.171063) | 0.331932 / 0.540337 (-0.208406) | 0.441656 / 1.386936 (-0.945280) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005132 / 0.011353 (-0.006221) | 0.003484 / 0.011008 (-0.007524) | 0.049040 / 0.038508 (0.010532) | 0.030254 / 0.023109 (0.007145) | 0.277141 / 0.275898 (0.001243) | 0.295242 / 0.323480 (-0.028238) | 0.004295 / 0.007986 (-0.003690) | 0.002632 / 0.004328 (-0.001696) | 0.048540 / 0.004250 (0.044290) | 0.044787 / 0.037052 (0.007734) | 0.287736 / 0.258489 (0.029247) | 0.313146 / 0.293841 (0.019305) | 0.029340 / 0.128546 (-0.099206) | 0.010204 / 0.075646 (-0.065442) | 0.059058 / 0.419271 (-0.360214) | 0.051033 / 0.043533 (0.007500) | 0.274086 / 0.255139 (0.018947) | 0.293048 / 0.283200 (0.009848) | 0.019573 / 0.141683 (-0.122110) | 1.174032 / 1.452155 (-0.278123) | 1.227107 / 1.492716 (-0.265609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094896 / 0.018006 (0.076890) | 0.303519 / 0.000490 (0.303029) | 0.000223 / 0.000200 (0.000023) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021495 / 0.037411 (-0.015917) | 0.074234 / 0.014526 (0.059708) | 0.086212 / 0.176557 (-0.090345) | 0.125052 / 0.737135 (-0.612084) | 0.087464 / 0.296338 (-0.208874) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297098 / 0.215209 (0.081889) | 2.970944 / 2.077655 (0.893289) | 1.650101 / 1.504120 (0.145981) | 1.532694 / 1.541195 (-0.008501) | 1.513652 / 1.468490 (0.045162) | 0.559614 / 4.584777 (-4.025163) | 2.404848 / 3.745712 (-1.340865) | 2.627851 / 5.269862 (-2.642011) | 1.707550 / 4.565676 (-2.858127) | 0.061821 / 0.424275 (-0.362454) | 0.005012 / 0.007607 (-0.002595) | 0.342462 / 0.226044 (0.116417) | 3.401703 / 2.268929 (1.132774) | 1.991632 / 55.444624 (-53.452993) | 1.737706 / 6.876477 (-5.138771) | 1.837457 / 2.142072 (-0.304616) | 0.638845 / 4.805227 (-4.166383) | 0.114773 / 6.500664 (-6.385891) | 0.040175 / 0.075469 (-0.035294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038286 / 1.841788 (-0.803501) | 11.885757 / 8.074308 (3.811448) | 10.061530 / 10.191392 (-0.129862) | 0.140824 / 0.680424 (-0.539600) | 0.015080 / 0.534201 (-0.519121) | 0.287992 / 0.579283 (-0.291291) | 0.273498 / 0.434364 (-0.160866) | 0.326478 / 0.540337 (-0.213860) | 0.426900 / 1.386936 (-0.960036) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b02be21047087c5ffc11cf1c072a5aceab517eba \"CML watermark\")\n"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6701/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6701 | https://github.com/huggingface/datasets/pull/6701 | true |
2,158,871,038 | https://api.github.com/repos/huggingface/datasets/issues/6700/labels{/name} | ### Describe the bug
The documentation of `datasets` v2.17.0/v2.17.1 states that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
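For illustration, a minimal sketch showing that the return value has to be assigned back (column names are hypothetical):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "unused_column": [0, 1]})
ds = ds.remove_columns("unused_column")  # returns a new Dataset; must be reassigned
print(ds.column_names)  # ['text']
```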
### Steps to reproduce the bug
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Expected behavior
Actually remove the columns.
### Environment info
1. datasets v2.17.0
2. transformers v4.38.1 | 2024-04-02T17:15:28Z | 6,700 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-28T12:36:22Z | https://api.github.com/repos/huggingface/datasets/issues/6700/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6700/timeline | remove_columns is not in-place but the doc shows it is in-place | https://api.github.com/repos/huggingface/datasets/issues/6700/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/32047804?v=4",
"events_url": "https://api.github.com/users/shelfofclub/events{/privacy}",
"followers_url": "https://api.github.com/users/shelfofclub/followers",
"following_url": "https://api.github.com/users/shelfofclub/following{/other_user}",
"gists_url": "https://api.github.com/users/shelfofclub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shelfofclub",
"id": 32047804,
"login": "shelfofclub",
"node_id": "MDQ6VXNlcjMyMDQ3ODA0",
"organizations_url": "https://api.github.com/users/shelfofclub/orgs",
"received_events_url": "https://api.github.com/users/shelfofclub/received_events",
"repos_url": "https://api.github.com/users/shelfofclub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shelfofclub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shelfofclub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shelfofclub"
} | [] | null | completed | NONE | 2024-04-02T17:15:28Z | null | I_kwDODunzps6ArcH- | [
"Good catch! I've opened a PR with a fix in the `transformers` repo.",
"@mariosasko Thanks!\r\n\r\nWill the doc of `datasets` be updated?\r\n\r\nI find some possible mistakes in doc about whether `remove_columns` is in-place.\r\n1. [You can also remove a column using map() with remove_columns but the present method is in-place (doesn’t copy the data to a new dataset) and is thus faster.](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.Dataset.remove_columns)\r\n2. [You can also remove a column using Dataset.map() with remove_columns but the present method is in-place (doesn’t copy the data to a new dataset) and is thus faster.](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)\r\n3. [🤗 Datasets also has a remove_columns() function which is faster because it doesn’t copy the data of the remaining columns.](https://huggingface.co/docs/datasets/v2.17.1/en/process#map)",
"I've linked a PR that will fix the usage in the `datasets` docs."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6700/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6700 | https://github.com/huggingface/datasets/issues/6700 | false |
2,158,152,341 | https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name} | ### Describe the bug
Keys with `None` values will unexpectedly appear in the parsed JSON dict.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
from datasets import Dataset
dataset = Dataset.from_json('test.jsonl')
print(dataset[0])
```
Result:
```
{'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}}
```
Those keys with `None` values unexpectedly appear in the dict.
### Expected behavior
The result should be:
```
{'id': 0, 'indexs': {'-1': [0, 10]}}
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
| 2024-02-28T19:14:36Z | 6,699 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-28T05:30:10Z | https://api.github.com/repos/huggingface/datasets/issues/6699/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6699/timeline | `Dataset` unexpected changed dict data and may cause error | https://api.github.com/repos/huggingface/datasets/issues/6699/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/scruel",
"id": 16933298,
"login": "scruel",
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"repos_url": "https://api.github.com/users/scruel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/scruel"
} | [] | null | null | NONE | null | null | I_kwDODunzps6AosqV | [
"If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn error occurred while generating the dataset\r\nTypeError: Couldn't cast array of type\r\nstruct<-5942: list<item: int64>, -5943: list<item: int64>, -5944: list<item: int64>, -5945: list<item: int64>, -5946: list<item: int64>, -5947: list<item: int64>, -5948: list<item: int64>, -5949: list<item: int64>: ...\r\nto\r\n{... '-5312': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), '-5313': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 120, in <module>\r\n reader = SnippetReader(jsonl_path, npy_path)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 85, in __init__\r\n self._dataset = Dataset.from_json(jsonl_path, features=)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/arrow_dataset.py\", line 1130, in from_json\r\n ).read()\r\n ^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/io/json.py\", line 59, in read\r\n self.builder.download_and_prepare(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File 
\"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1860, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 2016, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```",
"Hi! Our JSON parser expects all examples/rows to share the same set of columns (applies to nested columns, too), hence the error. \r\n\r\nTo read the `index` column, we would have to manually cast the input to PyArrow's `pa.map_` type, but this requires a more thorough investigation, as `pa.map_` has limited support in PyArrow."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6699 | https://github.com/huggingface/datasets/issues/6699 | false |
2,157,752,392 | https://api.github.com/repos/huggingface/datasets/issues/6698/labels{/name} | Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths. | 2024-02-27T23:44:49Z | 6,698 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-27T22:55:08Z | https://api.github.com/repos/huggingface/datasets/issues/6698/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6698/timeline | Faster `xlistdir` | https://api.github.com/repos/huggingface/datasets/issues/6698/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-02-27T23:38:14Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6698",
"merged_at": "2024-02-27T23:38:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6698"
} | PR_kwDODunzps5oG6Xt | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6698). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI failure is unrelated to the changes.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005499 / 0.011353 (-0.005854) | 0.003824 / 0.011008 (-0.007184) | 0.064230 / 0.038508 (0.025722) | 0.028962 / 0.023109 (0.005853) | 0.283540 / 0.275898 (0.007642) | 0.300774 / 0.323480 (-0.022706) | 0.003405 / 0.007986 (-0.004581) | 0.002796 / 0.004328 (-0.001532) | 0.049834 / 0.004250 (0.045584) | 0.045924 / 0.037052 (0.008872) | 0.274818 / 0.258489 (0.016328) | 0.306189 / 0.293841 (0.012348) | 0.028304 / 0.128546 (-0.100242) | 0.011496 / 0.075646 (-0.064150) | 0.208236 / 0.419271 (-0.211036) | 0.035720 / 0.043533 (-0.007813) | 0.261190 / 0.255139 (0.006051) | 0.281545 / 0.283200 (-0.001655) | 0.019388 / 0.141683 (-0.122295) | 1.134999 / 1.452155 (-0.317156) | 1.203053 / 1.492716 (-0.289663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096007 / 0.018006 (0.078000) | 0.316958 / 0.000490 (0.316469) | 0.000210 / 0.000200 (0.000010) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018330 / 0.037411 (-0.019081) | 0.063299 / 0.014526 (0.048773) | 0.073833 / 0.176557 (-0.102723) | 0.122285 / 0.737135 (-0.614850) | 0.077352 / 0.296338 (-0.218987) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304487 / 0.215209 (0.089278) | 3.017666 / 2.077655 (0.940012) | 1.664292 / 1.504120 (0.160172) | 1.448446 / 1.541195 (-0.092748) | 1.435612 / 
1.468490 (-0.032878) | 0.569704 / 4.584777 (-4.015073) | 2.362015 / 3.745712 (-1.383698) | 2.910380 / 5.269862 (-2.359481) | 1.814560 / 4.565676 (-2.751116) | 0.063986 / 0.424275 (-0.360289) | 0.005022 / 0.007607 (-0.002585) | 0.363528 / 0.226044 (0.137483) | 3.641940 / 2.268929 (1.373011) | 1.961589 / 55.444624 (-53.483035) | 1.603683 / 6.876477 (-5.272793) | 1.663144 / 2.142072 (-0.478928) | 0.645628 / 4.805227 (-4.159599) | 0.118759 / 6.500664 (-6.381905) | 0.042631 / 0.075469 (-0.032838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985648 / 1.841788 (-0.856140) | 13.082558 / 8.074308 (5.008250) | 9.909811 / 10.191392 (-0.281581) | 0.131340 / 0.680424 (-0.549083) | 0.013983 / 0.534201 (-0.520218) | 0.289869 / 0.579283 (-0.289414) | 0.271775 / 0.434364 (-0.162589) | 0.334853 / 0.540337 (-0.205485) | 0.457017 / 1.386936 (-0.929919) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005580 / 0.011353 (-0.005773) | 0.003788 / 0.011008 (-0.007221) | 0.049401 / 0.038508 (0.010893) | 0.030372 / 0.023109 (0.007263) | 0.278554 / 0.275898 (0.002655) | 0.302462 / 0.323480 (-0.021018) | 0.004412 / 0.007986 (-0.003573) | 0.002825 / 0.004328 (-0.001504) | 0.047826 / 0.004250 (0.043576) | 0.047903 / 0.037052 (0.010851) | 0.293098 / 0.258489 (0.034609) | 0.322777 / 0.293841 (0.028936) | 0.030010 / 0.128546 (-0.098536) | 0.011187 / 0.075646 (-0.064459) | 0.057639 / 0.419271 (-0.361632) | 0.059693 / 0.043533 (0.016160) | 0.280288 / 0.255139 (0.025149) | 0.294022 / 0.283200 (0.010823) | 0.019635 / 0.141683 (-0.122048) | 1.154733 / 1.452155 (-0.297422) | 1.200808 / 1.492716 (-0.291908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099682 / 0.018006 (0.081676) | 0.319521 / 0.000490 (0.319031) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022042 / 0.037411 (-0.015370) | 0.078842 / 0.014526 (0.064317) | 0.088715 / 0.176557 (-0.087841) | 0.126832 / 0.737135 (-0.610303) | 0.089217 / 0.296338 (-0.207122) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300099 / 0.215209 (0.084890) | 2.907746 / 2.077655 (0.830092) | 1.619418 / 1.504120 (0.115298) | 1.495693 / 1.541195 (-0.045501) | 1.544956 / 1.468490 (0.076466) | 0.556652 / 4.584777 (-4.028124) | 2.414408 / 3.745712 (-1.331304) | 2.737227 / 5.269862 (-2.532635) | 1.763187 / 4.565676 (-2.802490) | 0.062207 / 0.424275 (-0.362069) | 0.005076 / 0.007607 (-0.002531) | 0.349880 / 0.226044 (0.123836) | 3.425355 / 2.268929 (1.156427) | 1.972094 / 55.444624 (-53.472531) | 1.710650 / 6.876477 (-5.165827) | 1.902218 / 2.142072 (-0.239855) | 0.640699 / 4.805227 (-4.164529) | 0.117879 / 6.500664 (-6.382785) | 0.042412 / 0.075469 (-0.033057) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030131 / 1.841788 (-0.811656) | 12.750637 / 8.074308 (4.676329) | 10.352636 / 10.191392 (0.161244) | 0.141139 / 0.680424 (-0.539285) | 0.015343 / 0.534201 (-0.518858) | 0.294931 / 0.579283 (-0.284352) | 0.275237 / 0.434364 (-0.159127) | 0.336669 / 0.540337 (-0.203668) | 0.429945 / 1.386936 (-0.956991) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c424fa517a1b8517c89545f979e0c8c7d90c3e3 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6698/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6698 | https://github.com/huggingface/datasets/pull/6698 | true |
2,157,322,224 | https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name} | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook.
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("llm-blender/mix-instruct")
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1661 ignore_verifications = ignore_verifications or save_infos
1663 # Create a dataset builder
-> 1664 builder_instance = load_dataset_builder(
1665 path=path,
1666 name=name,
1667 data_dir=data_dir,
1668 data_files=data_files,
1669 cache_dir=cache_dir,
1670 features=features,
1671 download_config=download_config,
1672 download_mode=download_mode,
1673 revision=revision,
1674 use_auth_token=use_auth_token,
1675 **config_kwargs,
1676 )
1678 # Return iterable dataset in case of streaming
1679 if streaming:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1488 download_config = download_config.copy() if download_config else DownloadConfig()
1489 download_config.use_auth_token = use_auth_token
-> 1490 dataset_module = dataset_module_factory(
1491 path,
1492 revision=revision,
1493 download_config=download_config,
1494 download_mode=download_mode,
1495 data_dir=data_dir,
1496 data_files=data_files,
1497 )
1499 # Get dataset builder class from the processing script
1500 builder_cls = import_main_class(dataset_module.module_path)
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1237 if isinstance(e1, FileNotFoundError):
1238 raise FileNotFoundError(
1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1241 ) from None
-> 1242 raise e1 from None
1243 else:
1244 raise FileNotFoundError(
1245 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory."
1246 )
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1230, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1215 return HubDatasetModuleFactoryWithScript(
1216 path,
1217 revision=revision,
(...)
1220 dynamic_modules_path=dynamic_modules_path,
1221 ).get_module()
1222 else:
1223 return HubDatasetModuleFactoryWithoutScript(
1224 path,
1225 revision=revision,
1226 data_dir=data_dir,
1227 data_files=data_files,
1228 download_config=download_config,
1229 download_mode=download_mode,
-> 1230 ).get_module()
1231 except Exception as e1: # noqa: all the attempts failed, before raising the error we should check if the module is already cached.
1232 try:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self)
836 token = self.download_config.use_auth_token
837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
838 self.name,
839 revision=self.revision,
840 token=token,
841 timeout=100.0,
842 )
843 patterns = (
844 sanitize_patterns(self.data_files)
845 if self.data_files is not None
--> 846 else get_patterns_in_dataset_repository(hfh_dataset_info)
847 )
848 data_files = DataFilesDict.from_hf_repo(
849 patterns,
850 dataset_info=hfh_dataset_info,
851 allowed_extensions=ALL_ALLOWED_EXTENSIONS,
852 )
853 infered_module_names = {
854 key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token)
855 for key, data_files_list in data_files.items()
856 }
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info)
469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info)
470 try:
--> 471 return _get_data_files_patterns(resolver)
472 except FileNotFoundError:
473 raise FileNotFoundError(
474 f"The dataset repository at '{dataset_info.id}' doesn't contain any data file."
475 ) from None
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver)
97 try:
98 for pattern in patterns:
---> 99 data_files = pattern_resolver(pattern)
100 if len(data_files) > 0:
101 non_empty_splits.append(split)
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions)
301 data_files_ignore = FILES_TO_IGNORE
302 fs = HfFileSystem(repo_info=dataset_info)
--> 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
304 matched_paths = [
305 filepath
306 for filepath in glob_iter
307 if filepath.name not in data_files_ignore and not filepath.name.startswith(".")
308 ]
309 if allowed_extensions is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs)
602 depth = None
604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
--> 606 pattern = glob_translate(path + ("/" if ends_with_sep else ""))
607 pattern = re.compile(pattern)
609 out = {
610 p: info
611 for p, info in sorted(allpaths.items())
(...)
618 )
619 }
File /opt/conda/lib/python3.10/site-packages/fsspec/utils.py:734, in glob_translate(pat)
732 continue
733 elif "**" in part:
--> 734 raise ValueError(
735 "Invalid pattern: '**' can only be an entire path component"
736 )
737 if part:
738 results.extend(_translate(part, f"{not_sep}*", not_sep))
ValueError: Invalid pattern: '**' can only be an entire path component
```
This error is raised when trying to load the dataset.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("llm-blender/mix-instruct")
```
### Expected behavior
The dataset should load with the desired split.
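Based on the maintainer comment further down, a hedged workaround sketch (the exact version pin is an assumption) is to downgrade `fsspec` before loading:
```python
# assumption (see comments below): fsspec>=2023.12.0 is incompatible with
# datasets 2.17.x, so downgrade it first, e.g.:
#   pip install "fsspec<2023.12.0"
from datasets import load_dataset

dataset = load_dataset("llm-blender/mix-instruct")
```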
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| 2024-02-29T17:32:42Z | 6,697 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-27T18:19:34Z | https://api.github.com/repos/huggingface/datasets/issues/6697/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6697/timeline | Unable to Load Dataset in Kaggle | https://api.github.com/repos/huggingface/datasets/issues/6697/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"events_url": "https://api.github.com/users/vrunm/events{/privacy}",
"followers_url": "https://api.github.com/users/vrunm/followers",
"following_url": "https://api.github.com/users/vrunm/following{/other_user}",
"gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrunm",
"id": 97465624,
"login": "vrunm",
"node_id": "U_kgDOBc81GA",
"organizations_url": "https://api.github.com/users/vrunm/orgs",
"received_events_url": "https://api.github.com/users/vrunm/received_events",
"repos_url": "https://api.github.com/users/vrunm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrunm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrunm"
} | [] | null | completed | NONE | 2024-02-29T17:32:41Z | null | I_kwDODunzps6Alh_w | [
"FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 1.5.3\r\n- `fsspec` version: 2023.10.0",
"It is working on the Kaggle GPU instance but gives this same error when running on the CPU instance. Still to run it on Kaggle you require to install the latest versions of datasets and transformers.",
"This error means that `fsspec>=2023.12.0` is installed, which is incompatible with the current releases (the next `datasets` release will be the first to support it). In the meantime, downgrading `fsspec` (`pip install fsspec<=2023.12.0`) should fix the issue.",
"@mariosasko Thanks I got it to work with installing that version of fsspec."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6697 | https://github.com/huggingface/datasets/issues/6697 | false |
2,154,161,357 | https://api.github.com/repos/huggingface/datasets/issues/6696/labels{/name} | Support JSON file with an array of strings.
Fix #6695. | 2024-02-28T06:45:23Z | 6,696 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-26T13:18:31Z | https://api.github.com/repos/huggingface/datasets/issues/6696/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6696/timeline | Make JSON builder support an array of strings | https://api.github.com/repos/huggingface/datasets/issues/6696/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-28T06:39:12Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6696",
"merged_at": "2024-02-28T06:39:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6696"
} | PR_kwDODunzps5n6ipH | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6696). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005057 / 0.011353 (-0.006296) | 0.003665 / 0.011008 (-0.007343) | 0.063217 / 0.038508 (0.024709) | 0.028789 / 0.023109 (0.005679) | 0.233597 / 0.275898 (-0.042301) | 0.254792 / 0.323480 (-0.068687) | 0.003065 / 0.007986 (-0.004921) | 0.002686 / 0.004328 (-0.001642) | 0.050182 / 0.004250 (0.045932) | 0.042204 / 0.037052 (0.005151) | 0.254262 / 0.258489 (-0.004227) | 0.277099 / 0.293841 (-0.016742) | 0.027564 / 0.128546 (-0.100982) | 0.010768 / 0.075646 (-0.064878) | 0.207302 / 0.419271 (-0.211969) | 0.035737 / 0.043533 (-0.007796) | 0.242388 / 0.255139 (-0.012751) | 0.259833 / 0.283200 (-0.023367) | 0.019833 / 0.141683 (-0.121850) | 1.135928 / 1.452155 (-0.316227) | 1.162851 / 1.492716 (-0.329865) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089209 / 0.018006 (0.071202) | 0.300493 / 0.000490 (0.300003) | 0.000216 / 0.000200 (0.000016) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017968 / 0.037411 (-0.019444) | 0.061773 / 0.014526 (0.047247) | 0.073835 / 0.176557 (-0.102722) | 0.118592 / 0.737135 (-0.618544) | 0.073606 / 0.296338 (-0.222732) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287858 / 0.215209 (0.072649) | 2.822917 / 2.077655 (0.745262) | 1.485259 / 1.504120 (-0.018861) | 1.355922 / 1.541195 (-0.185273) | 1.364008 / 
1.468490 (-0.104482) | 0.557713 / 4.584777 (-4.027064) | 2.378972 / 3.745712 (-1.366741) | 2.737218 / 5.269862 (-2.532643) | 1.718317 / 4.565676 (-2.847359) | 0.062362 / 0.424275 (-0.361913) | 0.004992 / 0.007607 (-0.002615) | 0.350765 / 0.226044 (0.124721) | 3.387579 / 2.268929 (1.118650) | 1.860408 / 55.444624 (-53.584216) | 1.569355 / 6.876477 (-5.307122) | 1.593013 / 2.142072 (-0.549059) | 0.639325 / 4.805227 (-4.165902) | 0.121769 / 6.500664 (-6.378895) | 0.042148 / 0.075469 (-0.033322) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990594 / 1.841788 (-0.851194) | 11.460904 / 8.074308 (3.386596) | 9.438691 / 10.191392 (-0.752701) | 0.141884 / 0.680424 (-0.538540) | 0.013725 / 0.534201 (-0.520476) | 0.288847 / 0.579283 (-0.290436) | 0.278815 / 0.434364 (-0.155549) | 0.337108 / 0.540337 (-0.203229) | 0.441659 / 1.386936 (-0.945277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005265 / 0.011353 (-0.006088) | 0.003734 / 0.011008 (-0.007274) | 0.049365 / 0.038508 (0.010857) | 0.030483 / 0.023109 (0.007373) | 0.275085 / 0.275898 (-0.000813) | 0.296004 / 0.323480 (-0.027475) | 0.004964 / 0.007986 (-0.003022) | 0.002542 / 0.004328 (-0.001787) | 0.048734 / 0.004250 (0.044483) | 0.044098 / 0.037052 (0.007046) | 0.292517 / 0.258489 (0.034028) | 0.319992 / 0.293841 (0.026151) | 0.029552 / 0.128546 (-0.098994) | 0.010669 / 0.075646 (-0.064977) | 0.058887 / 0.419271 (-0.360385) | 0.051163 / 0.043533 (0.007630) | 0.277266 / 0.255139 (0.022127) | 0.295347 / 0.283200 (0.012147) | 0.018403 / 0.141683 (-0.123280) | 1.151979 / 1.452155 (-0.300176) | 1.204583 / 1.492716 (-0.288134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091157 / 0.018006 (0.073151) | 0.300109 / 0.000490 (0.299619) | 0.000211 / 0.000200 (0.000011) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021521 / 0.037411 (-0.015890) | 0.074954 / 0.014526 (0.060428) | 0.087010 / 0.176557 (-0.089546) | 0.125853 / 0.737135 (-0.611282) | 0.087877 / 0.296338 (-0.208461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297890 / 0.215209 (0.082681) | 2.912159 / 2.077655 (0.834504) | 1.619311 / 1.504120 (0.115192) | 1.501726 / 1.541195 (-0.039468) | 1.494143 / 1.468490 (0.025652) | 0.566744 / 4.584777 (-4.018033) | 2.497594 / 3.745712 (-1.248118) | 2.631403 / 5.269862 (-2.638459) | 1.727896 / 4.565676 (-2.837780) | 0.065937 / 0.424275 (-0.358339) | 0.005023 / 0.007607 (-0.002585) | 0.345747 / 0.226044 (0.119702) | 3.417615 / 2.268929 (1.148686) | 1.949970 / 55.444624 (-53.494654) | 1.680019 / 6.876477 (-5.196457) | 1.789879 / 2.142072 (-0.352193) | 0.648053 / 4.805227 (-4.157174) | 0.117408 / 6.500664 (-6.383256) | 0.040681 / 0.075469 (-0.034788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012535 / 1.841788 (-0.829252) | 11.935819 / 8.074308 (3.861511) | 10.241452 / 10.191392 (0.050060) | 0.130956 / 0.680424 (-0.549468) | 0.015396 / 0.534201 (-0.518805) | 0.289166 / 0.579283 (-0.290117) | 0.274149 / 0.434364 (-0.160215) | 0.325844 / 0.540337 (-0.214493) | 0.424919 / 1.386936 (-0.962017) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb834d9c63ab8cb14725ae8e4fc2da8672892a6d \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6696/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6696 | https://github.com/huggingface/datasets/pull/6696 | true |
2,154,075,509 | https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name} | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | 2024-03-08T14:16:25Z | 6,695 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-26T12:35:11Z | https://api.github.com/repos/huggingface/datasets/issues/6695/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6695/timeline | Support JSON file with an array of strings | https://api.github.com/repos/huggingface/datasets/issues/6695/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | completed | MEMBER | 2024-02-28T06:39:13Z | null | I_kwDODunzps6AZJV1 | [
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, but not the traceback in `details`... Do you remember the error message, or the underlying exception, we had?"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6695 | https://github.com/huggingface/datasets/issues/6695 | false |
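The request above (issue 6695) concerns JSON files whose top level is a bare array of strings, e.g. `["first text", "second text"]`. Below is a minimal sketch of one way to get such a file into a `Dataset` with the current public API, assuming a hypothetical local file `refined_text.json` and a column name `text`; the native support added for this issue may behave differently.

```python
import json

from datasets import Dataset

# Read the bare JSON array of strings, e.g. ["first text", "second text"].
with open("refined_text.json", encoding="utf-8") as f:
    texts = json.load(f)

# Wrap the strings in a named column so they fit the tabular Dataset model.
ds = Dataset.from_dict({"text": texts})
print(ds)
```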
2,153,086,984 | https://api.github.com/repos/huggingface/datasets/issues/6694/labels{/name} | It's too cumbersome to write this command every time we perform a dataset merging operation: `from datasets import concatenate_datasets`. We have added a simple `__add__` magic method to each class using `concatenate_datasets`.
```python
from datasets import load_dataset
bookcorpus = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikimedia/wikipedia", "20231101.ab", split="train")
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"]) # only keep the 'text' column
bookcorpus + wiki
#Dataset({
# features: ['text'],
# num_rows: 74004228
#})
#Dataset({
# features: ['text'],
# num_rows: 6152
#})
#Dataset({
# features: ['text'],
# num_rows: 74010380
#})
``` | 2024-02-29T16:52:58Z | 6,694 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-26T01:46:55Z | https://api.github.com/repos/huggingface/datasets/issues/6694/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6694/timeline | __add__ for Dataset, IterableDataset | https://api.github.com/repos/huggingface/datasets/issues/6694/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4",
"events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}",
"followers_url": "https://api.github.com/users/oh-gnues-iohc/followers",
"following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}",
"gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/oh-gnues-iohc",
"id": 79557937,
"login": "oh-gnues-iohc",
"node_id": "MDQ6VXNlcjc5NTU3OTM3",
"organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs",
"received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events",
"repos_url": "https://api.github.com/users/oh-gnues-iohc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/oh-gnues-iohc"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6694.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6694",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6694.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6694"
} | PR_kwDODunzps5n23Jz | [
"Hi! You can find a reason why we are against this feature in https://github.com/huggingface/datasets/issues/3449. \r\n\r\n> It's too cumbersome to write this command every time we perform a dataset merging operation\r\n\r\nExplicit is better than implicit, so this isn't a good enough reason. \r\n\r\nThanks for the effort nonetheless :)!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6694/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6694 | https://github.com/huggingface/datasets/pull/6694 | true |
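The PR above proposes `dataset_a + dataset_b` as sugar for `concatenate_datasets`; it was not merged, so the explicit call remains the supported spelling. A small sketch of that explicit equivalent on toy in-memory datasets (the toy columns are assumptions):

```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"text": ["a", "b"]})
b = Dataset.from_dict({"text": ["c"]})

# Explicit concatenation, which is what the proposed `__add__` would wrap.
combined = concatenate_datasets([a, b])
print(combined.num_rows)  # 3
```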
2,152,887,712 | https://api.github.com/repos/huggingface/datasets/issues/6693/labels{/name} | Update documentation to align with `Dataset.__repr__` change after #423 | 2024-02-25T19:57:12Z | 6,693 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-25T18:37:07Z | https://api.github.com/repos/huggingface/datasets/issues/6693/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6693/timeline | Update the print message for chunked_dataset in process.mdx | https://api.github.com/repos/huggingface/datasets/issues/6693/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/142939562?v=4",
"events_url": "https://api.github.com/users/gzbfgjf2/events{/privacy}",
"followers_url": "https://api.github.com/users/gzbfgjf2/followers",
"following_url": "https://api.github.com/users/gzbfgjf2/following{/other_user}",
"gists_url": "https://api.github.com/users/gzbfgjf2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gzbfgjf2",
"id": 142939562,
"login": "gzbfgjf2",
"node_id": "U_kgDOCIUVqg",
"organizations_url": "https://api.github.com/users/gzbfgjf2/orgs",
"received_events_url": "https://api.github.com/users/gzbfgjf2/received_events",
"repos_url": "https://api.github.com/users/gzbfgjf2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gzbfgjf2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gzbfgjf2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gzbfgjf2"
} | [] | null | null | CONTRIBUTOR | 2024-02-25T19:51:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6693",
"merged_at": "2024-02-25T19:51:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6693"
} | PR_kwDODunzps5n2ObO | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6693). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005069 / 0.011353 (-0.006284) | 0.003682 / 0.011008 (-0.007326) | 0.063733 / 0.038508 (0.025225) | 0.030377 / 0.023109 (0.007268) | 0.242962 / 0.275898 (-0.032936) | 0.262865 / 0.323480 (-0.060615) | 0.004760 / 0.007986 (-0.003225) | 0.002772 / 0.004328 (-0.001557) | 0.049094 / 0.004250 (0.044843) | 0.041093 / 0.037052 (0.004041) | 0.260423 / 0.258489 (0.001934) | 0.283908 / 0.293841 (-0.009933) | 0.027409 / 0.128546 (-0.101138) | 0.010548 / 0.075646 (-0.065098) | 0.208637 / 0.419271 (-0.210634) | 0.035386 / 0.043533 (-0.008147) | 0.242352 / 0.255139 (-0.012787) | 0.264201 / 0.283200 (-0.018999) | 0.017822 / 0.141683 (-0.123860) | 1.140792 / 1.452155 (-0.311363) | 1.166782 / 1.492716 (-0.325934) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094727 / 0.018006 (0.076720) | 0.308548 / 0.000490 (0.308059) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018106 / 0.037411 (-0.019305) | 0.062057 / 0.014526 (0.047531) | 0.073821 / 0.176557 (-0.102735) | 0.121269 / 0.737135 (-0.615867) | 0.074062 / 0.296338 (-0.222277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282978 / 0.215209 (0.067768) | 2.788626 / 2.077655 (0.710971) | 1.479756 / 1.504120 (-0.024364) | 1.360620 / 1.541195 (-0.180575) | 1.363996 / 
1.468490 (-0.104494) | 0.571646 / 4.584777 (-4.013131) | 2.430630 / 3.745712 (-1.315083) | 2.783909 / 5.269862 (-2.485953) | 1.744617 / 4.565676 (-2.821060) | 0.062771 / 0.424275 (-0.361504) | 0.004978 / 0.007607 (-0.002629) | 0.347929 / 0.226044 (0.121884) | 3.368837 / 2.268929 (1.099908) | 1.855635 / 55.444624 (-53.588990) | 1.581555 / 6.876477 (-5.294922) | 1.589888 / 2.142072 (-0.552184) | 0.655821 / 4.805227 (-4.149406) | 0.118990 / 6.500664 (-6.381674) | 0.042191 / 0.075469 (-0.033278) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991099 / 1.841788 (-0.850688) | 11.627919 / 8.074308 (3.553611) | 9.554180 / 10.191392 (-0.637212) | 0.140541 / 0.680424 (-0.539882) | 0.014264 / 0.534201 (-0.519937) | 0.288465 / 0.579283 (-0.290818) | 0.266400 / 0.434364 (-0.167964) | 0.324400 / 0.540337 (-0.215938) | 0.423158 / 1.386936 (-0.963778) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005588 / 0.011353 (-0.005765) | 0.003784 / 0.011008 (-0.007224) | 0.049961 / 0.038508 (0.011453) | 0.031215 / 0.023109 (0.008105) | 0.280859 / 0.275898 (0.004961) | 0.306416 / 0.323480 (-0.017063) | 0.004310 / 0.007986 (-0.003676) | 0.002884 / 0.004328 (-0.001445) | 0.049662 / 0.004250 (0.045412) | 0.046611 / 0.037052 (0.009559) | 0.293353 / 0.258489 (0.034864) | 0.327839 / 0.293841 (0.033998) | 0.050784 / 0.128546 (-0.077763) | 0.010890 / 0.075646 (-0.064757) | 0.059612 / 0.419271 (-0.359659) | 0.033175 / 0.043533 (-0.010358) | 0.281085 / 0.255139 (0.025946) | 0.302746 / 0.283200 (0.019547) | 0.019201 / 0.141683 (-0.122481) | 1.126722 / 1.452155 (-0.325433) | 1.225678 / 1.492716 (-0.267038) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094335 / 0.018006 (0.076329) | 0.304774 / 0.000490 (0.304285) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021648 / 0.037411 (-0.015763) | 0.077920 / 0.014526 (0.063394) | 0.087125 / 0.176557 (-0.089432) | 0.125481 / 0.737135 (-0.611654) | 0.089415 / 0.296338 (-0.206924) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304955 / 0.215209 (0.089746) | 2.992587 / 2.077655 (0.914932) | 1.654609 / 1.504120 (0.150490) | 1.509114 / 1.541195 (-0.032081) | 1.530906 / 1.468490 (0.062416) | 0.572092 / 4.584777 (-4.012685) | 2.477902 / 3.745712 (-1.267810) | 2.731363 / 5.269862 (-2.538498) | 1.750000 / 4.565676 (-2.815677) | 0.063662 / 0.424275 (-0.360613) | 0.005008 / 0.007607 (-0.002600) | 0.353066 / 0.226044 (0.127022) | 3.528309 / 2.268929 (1.259380) | 2.009238 / 55.444624 (-53.435387) | 1.717792 / 6.876477 (-5.158685) | 1.861699 / 2.142072 (-0.280373) | 0.667392 / 4.805227 (-4.137835) | 0.119197 / 6.500664 (-6.381467) | 0.041131 / 0.075469 (-0.034338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032182 / 1.841788 (-0.809605) | 12.042613 / 8.074308 (3.968305) | 10.256293 / 10.191392 (0.064901) | 0.141180 / 0.680424 (-0.539244) | 0.015005 / 0.534201 (-0.519196) | 0.290081 / 0.579283 (-0.289202) | 0.281081 / 0.434364 (-0.153283) | 0.331425 / 0.540337 (-0.208912) | 0.418674 / 1.386936 (-0.968262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad5b221c01a183a66cbf52a6d708f94e0cff0b53 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6693/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6693 | https://github.com/huggingface/datasets/pull/6693 | true |
2,152,270,987 | https://api.github.com/repos/huggingface/datasets/issues/6692/labels{/name} | Fix #6691 | 2024-02-26T15:33:50Z | 6,692 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-24T11:38:59Z | https://api.github.com/repos/huggingface/datasets/issues/6692/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6692/timeline | Enhancement: Enable loading TSV files in load_dataset() | https://api.github.com/repos/huggingface/datasets/issues/6692/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660"
} | [] | null | null | NONE | 2024-02-26T07:14:03Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6692",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6692"
} | PR_kwDODunzps5n0XN1 | [
"Hi @harsh1504660,\r\n\r\nThanks for your work, but this functionality already exists. See my comment in the corresponding issue: https://github.com/huggingface/datasets/issues/6691#issuecomment-1963449923\r\n\r\nNext time you would like to contribute, I would suggest you take on an issue that is previously validated by one of the maintainers. Thanks anyway."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6692/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6692 | https://github.com/huggingface/datasets/pull/6692 | true |
2,152,134,041 | https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name} | ### Feature request
`load_dataset()` for local files supports file types like CSV, JSON, etc., but not TSV (tab-separated values).
### Motivation
TSV files can't easily be loaded; they have to be converted to another type like CSV and then loaded.
### Your contribution
I can try raising a PR with a little help; I went through the code but didn't fully understand it yet | 2024-02-26T07:15:07Z | 6,691 | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-24T05:56:04Z | https://api.github.com/repos/huggingface/datasets/issues/6691/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660"
} | https://api.github.com/repos/huggingface/datasets/issues/6691/timeline | load_dataset() does not support tsv | https://api.github.com/repos/huggingface/datasets/issues/6691/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4",
"events_url": "https://api.github.com/users/dipsivenkatesh/events{/privacy}",
"followers_url": "https://api.github.com/users/dipsivenkatesh/followers",
"following_url": "https://api.github.com/users/dipsivenkatesh/following{/other_user}",
"gists_url": "https://api.github.com/users/dipsivenkatesh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dipsivenkatesh",
"id": 26873178,
"login": "dipsivenkatesh",
"node_id": "MDQ6VXNlcjI2ODczMTc4",
"organizations_url": "https://api.github.com/users/dipsivenkatesh/orgs",
"received_events_url": "https://api.github.com/users/dipsivenkatesh/received_events",
"repos_url": "https://api.github.com/users/dipsivenkatesh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dipsivenkatesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipsivenkatesh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dipsivenkatesh"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660"
}
] | null | completed | NONE | 2024-02-26T07:09:35Z | null | I_kwDODunzps6ARvWZ | [
"#self-assign",
"Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.packaged_modules.csv.CsvConfig"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6691 | https://github.com/huggingface/datasets/issues/6691 | false |
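As the maintainer comment above notes, TSV files can already be read through the existing CSV builder, which forwards `delimiter` to `pandas.read_csv`. A short sketch, assuming a hypothetical local file `data.tsv`:

```python
from datasets import load_dataset

# The CSV builder accepts pandas.read_csv-style options, including delimiter.
ds = load_dataset("csv", data_files={"train": "data.tsv"}, delimiter="\t")
print(ds["train"].column_names)
```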
2,150,800,065 | https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name} | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | 2024-02-23T10:28:20Z | 6,690 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-23T10:28:20Z | https://api.github.com/repos/huggingface/datasets/issues/6690/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6690/timeline | Add function to convert a script-dataset to Parquet | https://api.github.com/repos/huggingface/datasets/issues/6690/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | null | MEMBER | null | null | I_kwDODunzps6AMprB | [] | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6690 | https://github.com/huggingface/datasets/issues/6690 | false |
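The request above asks for a helper that converts a script-based dataset to Parquet and pushes it to the Hub. A rough sketch of the same operation with today's API, assuming placeholder repo ids (`user/old-script-dataset`, `user/parquet-copy`); the eventual helper may well expose a different interface.

```python
from datasets import load_dataset

# Loading runs the dataset script; push_to_hub re-uploads the data as Parquet shards.
ds = load_dataset("user/old-script-dataset")
ds.push_to_hub("user/parquet-copy", max_shard_size="500MB")
```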
2,149,581,147 | https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name} | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior: not only is zstandard not a dependency of the huggingface package (so dataset loading is interrupted while it asks you to install it), but it also happens on datasets that are uploaded in JSON format, meaning the dataset loader will attempt to convert the data to a zstandard-compatible format and THEN try to unpack it.
My 4 TB drive runs out of room when using zstandard on SlimPajama. It loads fine in 1.5 TB when using JSON; however, I lack an understanding of the "magic numbers" system used to select the unpacking algorithm, so I can't push a change myself.
Commenting out this line in "/datasets/utils/extract.py" fixes the issue and causes SlimPajama to extract properly using reasonable amounts of storage; however, it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue.
```
class Extractor:
# Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip)
extractors: Dict[str, Type[BaseExtractor]] = {
"tar": TarExtractor,
"gzip": GzipExtractor,
"zip": ZipExtractor,
"xz": XzExtractor,
#"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages
"rar": RarExtractor,
"bz2": Bzip2Extractor,
"7z": SevenZipExtractor, # <Added version="2.4.0"/>
"lz4": Lz4Extractor, # <Added version="2.4.0"/>
}
```
### Steps to reproduce the bug
```
from datasets import load_dataset
load_dataset(path="/cerebras/SlimPajama-627B")
```
This alone should trigger the error on any system that does not have zstandard pip installed.
### Expected behavior
This repository (which is encoded in json format, not zstandard) should check whether zstandard is installed before defaulting to it. Additionally, using zstandard should not use more than 3x the required space that other extraction mechanisms use.
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.0
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 2024-03-07T14:54:16Z | 6,689 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-22T17:39:27Z | https://api.github.com/repos/huggingface/datasets/issues/6689/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6689/timeline | .load_dataset() method defaults to zstandard | https://api.github.com/repos/huggingface/datasets/issues/6689/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ElleLeonne",
"id": 87243032,
"login": "ElleLeonne",
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ElleLeonne"
} | [] | null | completed | NONE | 2024-03-07T14:54:15Z | null | I_kwDODunzps6AIAFb | [
"The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.",
"> The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n> \r\n> That's why it asks for zstandard to be installed.\r\n> \r\n> Though I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.\r\n\r\nQuestion, then.\r\n\r\nWhen I loaded this dataset back in October, it downloaded all the files, and then loaded into memory just fine.\r\n\r\nNOW, it has to sit there and unpack all these zstd files (3.6TB worth). Further, when they're in my harddrive, they're regular json files. It's only when looking at the LFS, or when the loading script runs, that I get asked to install zstd.\r\n\r\nMy question is, **is this normal?** As far as I can tell, there's no reason the dataset or the loading methods should have changed between then and now. Was my old behavior flawed, and the new behavior correct?\r\n\r\nI mean, I got it working eventually, but it was pulling teeth, and it still doesn't load right, as I had to unpack each chunk separately, so there's no clean mapping between the chunks and the broader dataset.",
"The `ZstdExtractor` has been added 3 years ago and we haven't touched it since then. Same for the JSON loader.\r\n\r\n`zstandard` is required as soon as you try to load a file with the `.zstd` extension or if a file starts with the Zstandard magic number `b\"\\x28\\xb5\\x2f\\xfd\"` (used to recognize Zstandard files).\r\n\r\nNote that the extraction only has to happen once - if you reload the dataset it will be reloaded from your cache directly.\r\n\r\nNot sure what happened between October and now unfortunately",
"Understood, thank you for clarifying that for me.\r\n\r\nI'll look into how best to collate my stack of batches w/o creating duplicate arrow tables for each one."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6689 | https://github.com/huggingface/datasets/issues/6689 | false |
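The discussion above revolves around the "magic numbers" used to pick an extractor and the optional `zstandard` dependency. A tiny illustrative sketch of that sniffing idea, using the Zstandard frame magic `0x28 0xB5 0x2F 0xFD` mentioned in the thread; the function name and usage are hypothetical and not the library's actual code.

```python
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"  # first four bytes of a Zstandard frame

def looks_like_zstd(path: str) -> bool:
    """Return True if the file starts with the Zstandard magic number."""
    with open(path, "rb") as f:
        return f.read(4) == ZSTD_MAGIC

# Hypothetical usage: only require the optional dependency when it is needed.
# if looks_like_zstd("shard.jsonl.zst"):
#     import zstandard  # raise a clear install hint here if it is missing
```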
2,148,609,859 | https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name} | ### Describe the bug
I don't know if it is a bug or expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping to tokenize text with a transformers tokenizer always returns lists and ignores the `return_tensors` argument.
If this is expected behaviour (e.g., for caching/Arrow compatibility/etc.), it should be clearly documented. For example, the current documentation (see [here](https://huggingface.co/docs/datasets/v2.17.1/en/nlp_process#map)) clearly states to "set `return_tensors="np"` when you tokenize your text" to get NumPy arrays.
### Steps to reproduce the bug
```py
# %%%
import datasets
import numpy as np
import tensorflow as tf
import torch
from transformers import AutoTokenizer
# %%
ds = datasets.load_dataset("cnn_dailymail", "1.0.0", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
#%%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
print(f"********** no map, return_tensors={return_tensors} **********")
_ds = tokenizer(ds["article"], return_tensors=return_tensors, truncation=True, padding=True)
print('Type <input_ids>:', type(_ds["input_ids"]))
# %%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
print(f"********** map, return_tensors={return_tensors} **********")
_ds = ds.map(
lambda examples: tokenizer(examples["article"], return_tensors=return_tensors, truncation=True, padding=True),
batched=True,
remove_columns=["article"],
)
print('Type <input_ids>:', type(_ds[0]["input_ids"]))
```
### Expected behavior
The output from the script above. I would expect the second half to be the same.
```
********** no map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** no map, return_tensors=np **********
Type <input_ids>: <class 'numpy.ndarray'>
********** no map, return_tensors=pt **********
Type <input_ids>: <class 'torch.Tensor'>
********** no map, return_tensors=tf **********
Type <input_ids>: <class 'tensorflow.python.framework.ops.EagerTensor'>
********** no map, return_tensors=jax **********
Type <input_ids>: <class 'jaxlib.xla_extension.ArrayImpl'>
********** map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=np **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=pt **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=tf **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=jax **********
Type <input_ids>: <class 'list'>
```
### Environment info
- `datasets` version: 2.17.1
- Platform: Redacted (linux)
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | 2024-02-22T15:56:21Z | 6,688 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-22T09:27:57Z | https://api.github.com/repos/huggingface/datasets/issues/6688/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6688/timeline | Tensor type (e.g. from `return_tensors`) ignored in map | https://api.github.com/repos/huggingface/datasets/issues/6688/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4",
"events_url": "https://api.github.com/users/srossi93/events{/privacy}",
"followers_url": "https://api.github.com/users/srossi93/followers",
"following_url": "https://api.github.com/users/srossi93/following{/other_user}",
"gists_url": "https://api.github.com/users/srossi93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srossi93",
"id": 11166137,
"login": "srossi93",
"node_id": "MDQ6VXNlcjExMTY2MTM3",
"organizations_url": "https://api.github.com/users/srossi93/orgs",
"received_events_url": "https://api.github.com/users/srossi93/received_events",
"repos_url": "https://api.github.com/users/srossi93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srossi93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srossi93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srossi93"
} | [] | null | null | NONE | null | null | I_kwDODunzps6AES9D | [
"Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```",
"Thanks. Just one additional question. During the pipeline `<framework> -> arrow -> <framework>`, does `.with_format` zero-copies the tensors or is it a deep copy? And is this behavior framework-dependent?\r\n\r\nThanks again.",
"We do zero-copy Arrow <-> NumPy <-> PyTorch when the output dtype matches the original dtype, but for other frameworks it depends. For example JAX doesn't allow zero-copy NumPy -> JAX at all IIRC.\r\n\r\nCurrently tokenized data are formatted using a copy though, since tokens are stored as int32 and returned as int64 torch tensors."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6688 | https://github.com/huggingface/datasets/issues/6688 | false |
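Per the maintainer reply above, `map` always stores Arrow data, and the dataset format controls what type is returned, so tensors are recovered with `with_format`. A small sketch on a toy dataset (the `bert-base-cased` tokenizer and toy text are assumptions):

```python
from datasets import Dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
ds = Dataset.from_dict({"article": ["a short example", "another one"]})

# The tokenized columns are stored as Arrow lists regardless of return_tensors.
ds = ds.map(lambda ex: tok(ex["article"], truncation=True, padding=True), batched=True)

# Setting the format is what turns the output back into torch tensors.
ds = ds.with_format("torch")
print(type(ds[0]["input_ids"]))  # <class 'torch.Tensor'>
```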
2,148,554,178 | https://api.github.com/repos/huggingface/datasets/issues/6687/labels{/name} | - adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound
Should close #6644
Should close #6645
The `test_data_files` glob/pattern tests pass for me in:
- `fsspec==2023.10.0` (the pinned max version in datasets `main`)
- `fsspec==2023.12.0` (#6644)
- `fsspec==2024.2.0` (#6645) | 2024-03-04T12:59:42Z | 6,687 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-22T08:59:32Z | https://api.github.com/repos/huggingface/datasets/issues/6687/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6687/timeline | fsspec: support fsspec>=2023.12.0 glob changes | https://api.github.com/repos/huggingface/datasets/issues/6687/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/651988?v=4",
"events_url": "https://api.github.com/users/pmrowla/events{/privacy}",
"followers_url": "https://api.github.com/users/pmrowla/followers",
"following_url": "https://api.github.com/users/pmrowla/following{/other_user}",
"gists_url": "https://api.github.com/users/pmrowla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pmrowla",
"id": 651988,
"login": "pmrowla",
"node_id": "MDQ6VXNlcjY1MTk4OA==",
"organizations_url": "https://api.github.com/users/pmrowla/orgs",
"received_events_url": "https://api.github.com/users/pmrowla/received_events",
"repos_url": "https://api.github.com/users/pmrowla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pmrowla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmrowla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pmrowla"
} | [] | null | null | CONTRIBUTOR | 2024-02-29T15:12:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6687.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6687",
"merged_at": "2024-02-29T15:12:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6687.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6687"
} | PR_kwDODunzps5nnqBB | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6687). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Looking into the CI failure, this PR is incompatible with `huggingface-hub>=0.20.0`. It looks like there were several changes made to HfFileSystem in 0.20.0, @lhoestq any ideas on what the issue might be in particular?\r\n\r\na bisect indicates that it's related to https://github.com/huggingface/huggingface_hub/pull/1815",
"It looks like huggingface-hub's `HfFileSystem.glob` is broken for exact string matches (that don't contain glob wildcards) when combining `huggingface-hub>=0.20.0` and `fsspec>=2023.12.0`.\r\n\r\nI did a quick test with huggingface-hub `main`, and adding this test case to `tests/test_hf_filesystem::HfFileSystemTests::test_glob` (https://github.com/huggingface/huggingface_hub/blob/main/tests/test_hf_file_system.py) passes with `fsspec==2023.10.0` and fails with `fsspec==2023.12.0`\r\n```python\r\n self.assertEqual(\r\n sorted(self.hffs.glob(self.hf_path + \"/.gitattributes\")),\r\n sorted([self.hf_path + \"/.gitattributes\"]),\r\n )\r\n\r\n```\r\n\r\nthe `hffs.glob()` call with a pattern that does not contain any wildcards returns an empty list:\r\n```\r\nE AssertionError: Lists differ: [] != ['datasets/__DUMMY_TRANSFORMERS_USER__/rep[35 chars]tes']\r\nE\r\nE Second list contains 1 additional elements.\r\nE First extra element 0:\r\nE 'datasets/__DUMMY_TRANSFORMERS_USER__/repo-7d0ae9-17091013467064/.gitattributes'\r\nE\r\nE - []\r\nE + ['datasets/__DUMMY_TRANSFORMERS_USER__/repo-7d0ae9-17091013467064/.gitattributes']\r\n```\r\n(and with the compatible/passing older fsspec versions the glob call returns the single exact file match as expected)\r\n\r\nSo it looks like the CI failure here isn't directly related to this PR. The failing patterns that don't contain any `*` wildcards are generated by `datasets` with or without this PR, but now that this PR installs the incompatible fsspec version, the underlying `HfFileSystem.glob()` call ends up failing.",
"I just opened https://github.com/huggingface/huggingface_hub/pull/2056 to fix this.\r\n\r\nDo you mind if I continue this PR to run the CI against `huggingface_hub@main` until the fix is released ?\r\n\r\nEDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`",
"I just added two additional patterns to cover cases like `test-data/xxx.csv` and `data-test/xxx.csv`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005461 / 0.011353 (-0.005892) | 0.003861 / 0.011008 (-0.007148) | 0.063252 / 0.038508 (0.024744) | 0.031474 / 0.023109 (0.008364) | 0.250321 / 0.275898 (-0.025577) | 0.275198 / 0.323480 (-0.048282) | 0.003275 / 0.007986 (-0.004710) | 0.002874 / 0.004328 (-0.001454) | 0.049499 / 0.004250 (0.045248) | 0.045334 / 0.037052 (0.008282) | 0.266347 / 0.258489 (0.007858) | 0.308974 / 0.293841 (0.015133) | 0.027742 / 0.128546 (-0.100804) | 0.010274 / 0.075646 (-0.065373) | 0.207516 / 0.419271 (-0.211755) | 0.036538 / 0.043533 (-0.006995) | 0.247949 / 0.255139 (-0.007190) | 0.268986 / 0.283200 (-0.014214) | 0.019842 / 0.141683 (-0.121841) | 1.117547 / 1.452155 (-0.334607) | 1.175813 / 1.492716 (-0.316903) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103661 / 0.018006 (0.085655) | 0.331023 / 0.000490 (0.330534) | 0.000240 / 0.000200 (0.000040) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019767 / 0.037411 (-0.017645) | 0.061500 / 0.014526 (0.046974) | 0.075899 / 0.176557 (-0.100658) | 0.122240 / 0.737135 (-0.614895) | 0.074621 / 0.296338 (-0.221717) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287501 / 0.215209 (0.072292) | 2.794737 / 2.077655 (0.717082) | 1.505362 / 1.504120 (0.001242) | 1.379481 / 1.541195 (-0.161713) | 1.394836 / 
1.468490 (-0.073654) | 0.545803 / 4.584777 (-4.038974) | 2.364167 / 3.745712 (-1.381545) | 2.800923 / 5.269862 (-2.468939) | 1.723910 / 4.565676 (-2.841766) | 0.061270 / 0.424275 (-0.363005) | 0.005006 / 0.007607 (-0.002601) | 0.334952 / 0.226044 (0.108908) | 3.367122 / 2.268929 (1.098194) | 1.839822 / 55.444624 (-53.604803) | 1.553774 / 6.876477 (-5.322703) | 1.583585 / 2.142072 (-0.558488) | 0.624680 / 4.805227 (-4.180547) | 0.116364 / 6.500664 (-6.384300) | 0.042412 / 0.075469 (-0.033057) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975207 / 1.841788 (-0.866580) | 11.843126 / 8.074308 (3.768818) | 9.418537 / 10.191392 (-0.772855) | 0.130648 / 0.680424 (-0.549775) | 0.013747 / 0.534201 (-0.520454) | 0.288195 / 0.579283 (-0.291088) | 0.269861 / 0.434364 (-0.164503) | 0.326732 / 0.540337 (-0.213606) | 0.441256 / 1.386936 (-0.945680) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005185 / 0.011353 (-0.006168) | 0.003836 / 0.011008 (-0.007172) | 0.050057 / 0.038508 (0.011549) | 0.030929 / 0.023109 (0.007820) | 0.263558 / 0.275898 (-0.012340) | 0.284553 / 0.323480 (-0.038927) | 0.004331 / 0.007986 (-0.003655) | 0.002815 / 0.004328 (-0.001513) | 0.050187 / 0.004250 (0.045936) | 0.048431 / 0.037052 (0.011379) | 0.271005 / 0.258489 (0.012515) | 0.304749 / 0.293841 (0.010908) | 0.029286 / 0.128546 (-0.099260) | 0.010598 / 0.075646 (-0.065048) | 0.058111 / 0.419271 (-0.361160) | 0.053665 / 0.043533 (0.010132) | 0.257574 / 0.255139 (0.002436) | 0.285802 / 0.283200 (0.002602) | 0.018917 / 0.141683 (-0.122766) | 1.206517 / 1.452155 (-0.245638) | 1.220572 / 1.492716 (-0.272144) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.122466 / 0.018006 (0.104460) | 0.567887 / 0.000490 (0.567397) | 0.000321 / 0.000200 (0.000121) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022120 / 0.037411 (-0.015292) | 0.075456 / 0.014526 (0.060931) | 0.086385 / 0.176557 (-0.090171) | 0.126045 / 0.737135 (-0.611091) | 0.087502 / 0.296338 (-0.208837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304847 / 0.215209 (0.089638) | 3.008095 / 2.077655 (0.930441) | 1.726178 / 1.504120 (0.222058) | 1.592332 / 1.541195 (0.051138) | 1.603714 / 1.468490 (0.135224) | 0.576875 / 4.584777 (-4.007902) | 2.450884 / 3.745712 (-1.294828) | 2.719073 / 5.269862 (-2.550789) | 1.775261 / 4.565676 (-2.790415) | 0.063144 / 0.424275 (-0.361131) | 0.005122 / 0.007607 (-0.002485) | 0.350004 / 0.226044 (0.123960) | 3.467146 / 2.268929 (1.198218) | 2.062907 / 55.444624 (-53.381717) | 1.798793 / 6.876477 (-5.077684) | 1.921204 / 2.142072 (-0.220868) | 0.651832 / 4.805227 (-4.153396) | 0.122326 / 6.500664 (-6.378338) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024859 / 1.841788 (-0.816928) | 12.569744 / 8.074308 (4.495436) | 10.448487 / 10.191392 (0.257095) | 0.131529 / 0.680424 (-0.548895) | 0.014853 / 0.534201 (-0.519348) | 0.287683 / 0.579283 (-0.291600) | 0.289814 / 0.434364 (-0.144550) | 0.323935 / 0.540337 (-0.216403) | 0.425208 / 1.386936 (-0.961728) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba71e92c59c9bd9d1ee6168691977f0c4728ed6e \"CML watermark\")\n",
"> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`\r\n\r\nPlease note that people using `huggingface_hub` < 0.21.2 and latest `fsspec` will have issues when using `datasets`:\r\n- https://github.com/huggingface/lighteval/actions/runs/8139147047/job/22241658122?pr=86\r\n- https://github.com/huggingface/lighteval/pull/84\r\n\r\nCC: @clefourrier \r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6687/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6687 | https://github.com/huggingface/datasets/pull/6687 | true |
2,147,795,103 | https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name} | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```
where the `Map` step takes a long time. Do you think I can use multiprocessing to map all the image data into memory first? For the `map()` function, I can set `num_proc`, but I cannot find an equivalent option for `push_to_hub` and `cast_column`.
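
For example, something along these lines is what I have in mind — a rough sketch only, assuming the `images` column holds local file paths; the helper name and `num_proc` value are placeholders:
```
from pathlib import Path

from datasets import Image, Sequence, load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)

def read_image_bytes(example):
    # Hypothetical helper: assumes "images" is a list of local file paths.
    # Reading the bytes here lets the heavy I/O run in parallel workers.
    example["images"] = [
        {"path": str(p), "bytes": Path(p).read_bytes()} for p in example["images"]
    ]
    return example

# The expensive file reading happens in this parallel map instead of in a
# single-process step later on.
dataset = dataset.map(read_image_bytes, num_proc=8)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```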
Thanks in advance!
Best, | 2024-02-21T22:07:21Z | 6,686 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-21T22:07:21Z | https://api.github.com/repos/huggingface/datasets/issues/6686/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6686/timeline | Question: Is there any way for uploading a large image dataset? | https://api.github.com/repos/huggingface/datasets/issues/6686/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4",
"events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}",
"followers_url": "https://api.github.com/users/zhjohnchan/followers",
"following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}",
"gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhjohnchan",
"id": 37367987,
"login": "zhjohnchan",
"node_id": "MDQ6VXNlcjM3MzY3OTg3",
"organizations_url": "https://api.github.com/users/zhjohnchan/orgs",
"received_events_url": "https://api.github.com/users/zhjohnchan/received_events",
"repos_url": "https://api.github.com/users/zhjohnchan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhjohnchan"
} | [] | null | null | NONE | null | null | I_kwDODunzps6ABMCf | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6686 | https://github.com/huggingface/datasets/issues/6686 | false |
2,145,570,006 | https://api.github.com/repos/huggingface/datasets/issues/6685/labels{/name} | Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb) | 2024-03-12T21:31:04Z | 6,685 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-21T01:04:18Z | https://api.github.com/repos/huggingface/datasets/issues/6685/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6685/timeline | Updated Quickstart Notebook link | https://api.github.com/repos/huggingface/datasets/issues/6685/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"events_url": "https://api.github.com/users/Codeblockz/events{/privacy}",
"followers_url": "https://api.github.com/users/Codeblockz/followers",
"following_url": "https://api.github.com/users/Codeblockz/following{/other_user}",
"gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Codeblockz",
"id": 55932554,
"login": "Codeblockz",
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"organizations_url": "https://api.github.com/users/Codeblockz/orgs",
"received_events_url": "https://api.github.com/users/Codeblockz/received_events",
"repos_url": "https://api.github.com/users/Codeblockz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Codeblockz"
} | [] | null | null | CONTRIBUTOR | 2024-02-25T18:48:08Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6685",
"merged_at": "2024-02-25T18:48:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6685"
} | PR_kwDODunzps5ndZQa | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6685). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005386 / 0.011353 (-0.005967) | 0.003707 / 0.011008 (-0.007301) | 0.062661 / 0.038508 (0.024153) | 0.029058 / 0.023109 (0.005949) | 0.249669 / 0.275898 (-0.026230) | 0.280996 / 0.323480 (-0.042484) | 0.004041 / 0.007986 (-0.003945) | 0.002713 / 0.004328 (-0.001616) | 0.047914 / 0.004250 (0.043664) | 0.042014 / 0.037052 (0.004961) | 0.265209 / 0.258489 (0.006720) | 0.297320 / 0.293841 (0.003479) | 0.028323 / 0.128546 (-0.100223) | 0.010844 / 0.075646 (-0.064802) | 0.205895 / 0.419271 (-0.213377) | 0.035997 / 0.043533 (-0.007536) | 0.245069 / 0.255139 (-0.010070) | 0.266159 / 0.283200 (-0.017040) | 0.017590 / 0.141683 (-0.124093) | 1.132046 / 1.452155 (-0.320109) | 1.177496 / 1.492716 (-0.315220) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.105441 / 0.018006 (0.087435) | 0.301321 / 0.000490 (0.300831) | 0.000211 / 0.000200 (0.000011) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018687 / 0.037411 (-0.018724) | 0.061221 / 0.014526 (0.046695) | 0.072556 / 0.176557 (-0.104001) | 0.119641 / 0.737135 (-0.617495) | 0.073781 / 0.296338 (-0.222557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284564 / 0.215209 (0.069354) | 2.795786 / 2.077655 (0.718131) | 1.437059 / 1.504120 (-0.067061) | 1.309319 / 1.541195 (-0.231876) | 1.315849 / 
1.468490 (-0.152641) | 0.578571 / 4.584777 (-4.006206) | 2.350754 / 3.745712 (-1.394958) | 2.758499 / 5.269862 (-2.511362) | 1.705545 / 4.565676 (-2.860131) | 0.063660 / 0.424275 (-0.360615) | 0.005506 / 0.007607 (-0.002101) | 0.334915 / 0.226044 (0.108871) | 3.295922 / 2.268929 (1.026994) | 1.796513 / 55.444624 (-53.648111) | 1.488113 / 6.876477 (-5.388364) | 1.523042 / 2.142072 (-0.619031) | 0.648169 / 4.805227 (-4.157058) | 0.119321 / 6.500664 (-6.381343) | 0.041932 / 0.075469 (-0.033537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982432 / 1.841788 (-0.859356) | 11.344780 / 8.074308 (3.270472) | 9.627219 / 10.191392 (-0.564173) | 0.142590 / 0.680424 (-0.537834) | 0.013899 / 0.534201 (-0.520302) | 0.286335 / 0.579283 (-0.292948) | 0.266552 / 0.434364 (-0.167812) | 0.320361 / 0.540337 (-0.219977) | 0.420303 / 1.386936 (-0.966633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005251 / 0.011353 (-0.006102) | 0.003515 / 0.011008 (-0.007494) | 0.049344 / 0.038508 (0.010836) | 0.032055 / 0.023109 (0.008945) | 0.280653 / 0.275898 (0.004755) | 0.303989 / 0.323480 (-0.019491) | 0.004402 / 0.007986 (-0.003584) | 0.002758 / 0.004328 (-0.001570) | 0.050947 / 0.004250 (0.046697) | 0.044405 / 0.037052 (0.007353) | 0.292856 / 0.258489 (0.034367) | 0.325307 / 0.293841 (0.031466) | 0.047720 / 0.128546 (-0.080827) | 0.010589 / 0.075646 (-0.065057) | 0.057728 / 0.419271 (-0.361543) | 0.033842 / 0.043533 (-0.009691) | 0.285443 / 0.255139 (0.030304) | 0.300013 / 0.283200 (0.016814) | 0.017444 / 0.141683 (-0.124238) | 1.152880 / 1.452155 (-0.299275) | 1.200670 / 1.492716 (-0.292046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092355 / 0.018006 (0.074349) | 0.307907 / 0.000490 (0.307418) | 0.000226 / 0.000200 (0.000026) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021624 / 0.037411 (-0.015787) | 0.075855 / 0.014526 (0.061329) | 0.087109 / 0.176557 (-0.089447) | 0.124859 / 0.737135 (-0.612276) | 0.088933 / 0.296338 (-0.207406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294213 / 0.215209 (0.079004) | 2.893146 / 2.077655 (0.815491) | 1.595061 / 1.504120 (0.090942) | 1.480959 / 1.541195 (-0.060236) | 1.528277 / 1.468490 (0.059787) | 0.570273 / 4.584777 (-4.014504) | 2.412948 / 3.745712 (-1.332764) | 2.675009 / 5.269862 (-2.594852) | 1.724005 / 4.565676 (-2.841671) | 0.063359 / 0.424275 (-0.360916) | 0.005008 / 0.007607 (-0.002599) | 0.346570 / 0.226044 (0.120526) | 3.456566 / 2.268929 (1.187637) | 1.973109 / 55.444624 (-53.471515) | 1.657562 / 6.876477 (-5.218915) | 1.790086 / 2.142072 (-0.351986) | 0.655277 / 4.805227 (-4.149950) | 0.117985 / 6.500664 (-6.382679) | 0.041128 / 0.075469 (-0.034342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001428 / 1.841788 (-0.840360) | 11.953458 / 8.074308 (3.879150) | 10.188439 / 10.191392 (-0.002953) | 0.140863 / 0.680424 (-0.539561) | 0.015278 / 0.534201 (-0.518923) | 0.288193 / 0.579283 (-0.291090) | 0.281732 / 0.434364 (-0.152632) | 0.328034 / 0.540337 (-0.212304) | 0.414571 / 1.386936 (-0.972365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#531f35e688f81ec6b4c9044856a89a6b48142bd8 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6685/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6685 | https://github.com/huggingface/datasets/pull/6685 | true |
2,144,092,388 | https://api.github.com/repos/huggingface/datasets/issues/6684/labels{/name} | Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029 | 2024-02-20T15:40:52Z | 6,684 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-20T10:51:27Z | https://api.github.com/repos/huggingface/datasets/issues/6684/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6684/timeline | Improve error message for gated datasets on load | https://api.github.com/repos/huggingface/datasets/issues/6684/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | null | null | MEMBER | 2024-02-20T15:33:56Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6684",
"merged_at": "2024-02-20T15:33:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6684"
} | PR_kwDODunzps5nYUIf | [
"Thank you ! Should we also add the link to the dataset page ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thank you ! Should we also add the link to the dataset page ?\r\n\r\nGood idea! Done in https://github.com/huggingface/datasets/pull/6684/commits/4ab55210dca1815b6c2f23901598bfb29fc92a47",
"Looks like a test is failing: `test_load_dataset_cached_local_script `.\r\n\r\nApparently your new message is also shown for datasets that don't exist, which is maybe not ideal",
"Ah let me take a look!",
"> Looks like a test is failing: `test_load_dataset_cached_local_script `.\r\n> \r\n> Apparently your new message is also shown for datasets that don't exist, which is maybe not ideal\r\n\r\nFixed by reverting the error message root + added a small clarifying part",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005634 / 0.011353 (-0.005719) | 0.003786 / 0.011008 (-0.007222) | 0.064245 / 0.038508 (0.025737) | 0.031228 / 0.023109 (0.008119) | 0.248162 / 0.275898 (-0.027736) | 0.273454 / 0.323480 (-0.050026) | 0.003176 / 0.007986 (-0.004809) | 0.002814 / 0.004328 (-0.001515) | 0.049234 / 0.004250 (0.044984) | 0.046075 / 0.037052 (0.009023) | 0.262410 / 0.258489 (0.003921) | 0.290597 / 0.293841 (-0.003244) | 0.028545 / 0.128546 (-0.100001) | 0.010881 / 0.075646 (-0.064766) | 0.212098 / 0.419271 (-0.207173) | 0.036406 / 0.043533 (-0.007127) | 0.244571 / 0.255139 (-0.010568) | 0.269537 / 0.283200 (-0.013663) | 0.019574 / 0.141683 (-0.122109) | 1.120369 / 1.452155 (-0.331785) | 1.170188 / 1.492716 (-0.322529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108088 / 0.018006 (0.090082) | 0.299836 / 0.000490 (0.299346) | 0.000204 / 0.000200 (0.000004) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020881 / 0.037411 (-0.016531) | 0.065290 / 0.014526 (0.050764) | 0.074283 / 0.176557 (-0.102274) | 0.122189 / 0.737135 (-0.614947) | 0.077772 / 0.296338 (-0.218566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278329 / 0.215209 (0.063120) | 2.709885 / 2.077655 (0.632230) | 1.428824 / 1.504120 (-0.075296) | 1.314338 / 1.541195 (-0.226857) | 1.349445 / 
1.468490 (-0.119045) | 0.571863 / 4.584777 (-4.012914) | 2.358306 / 3.745712 (-1.387407) | 2.873498 / 5.269862 (-2.396364) | 1.779897 / 4.565676 (-2.785779) | 0.062828 / 0.424275 (-0.361447) | 0.005416 / 0.007607 (-0.002191) | 0.337645 / 0.226044 (0.111601) | 3.328868 / 2.268929 (1.059940) | 1.793387 / 55.444624 (-53.651238) | 1.539201 / 6.876477 (-5.337276) | 1.589552 / 2.142072 (-0.552520) | 0.645454 / 4.805227 (-4.159773) | 0.116966 / 6.500664 (-6.383698) | 0.043339 / 0.075469 (-0.032130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995743 / 1.841788 (-0.846045) | 12.096551 / 8.074308 (4.022243) | 10.214299 / 10.191392 (0.022907) | 0.133025 / 0.680424 (-0.547399) | 0.014393 / 0.534201 (-0.519808) | 0.289018 / 0.579283 (-0.290266) | 0.267879 / 0.434364 (-0.166485) | 0.324362 / 0.540337 (-0.215976) | 0.425596 / 1.386936 (-0.961340) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005739 / 0.011353 (-0.005614) | 0.003992 / 0.011008 (-0.007017) | 0.051362 / 0.038508 (0.012854) | 0.031707 / 0.023109 (0.008598) | 0.274807 / 0.275898 (-0.001091) | 0.298897 / 0.323480 (-0.024583) | 0.004363 / 0.007986 (-0.003622) | 0.002862 / 0.004328 (-0.001466) | 0.050462 / 0.004250 (0.046212) | 0.048158 / 0.037052 (0.011106) | 0.282759 / 0.258489 (0.024270) | 0.317766 / 0.293841 (0.023926) | 0.060245 / 0.128546 (-0.068301) | 0.011279 / 0.075646 (-0.064367) | 0.061175 / 0.419271 (-0.358097) | 0.035876 / 0.043533 (-0.007656) | 0.273963 / 0.255139 (0.018824) | 0.288788 / 0.283200 (0.005589) | 0.019690 / 0.141683 (-0.121992) | 1.167074 / 1.452155 (-0.285080) | 1.206344 / 1.492716 (-0.286372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091211 / 0.018006 (0.073205) | 0.299295 / 0.000490 (0.298805) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022718 / 0.037411 (-0.014693) | 0.079483 / 0.014526 (0.064957) | 0.087437 / 0.176557 (-0.089120) | 0.126977 / 0.737135 (-0.610159) | 0.089678 / 0.296338 (-0.206660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294719 / 0.215209 (0.079510) | 2.864505 / 2.077655 (0.786851) | 1.583993 / 1.504120 (0.079873) | 1.455079 / 1.541195 (-0.086115) | 1.504080 / 1.468490 (0.035590) | 0.569040 / 4.584777 (-4.015737) | 2.423472 / 3.745712 (-1.322240) | 2.742848 / 5.269862 (-2.527014) | 1.785244 / 4.565676 (-2.780432) | 0.062655 / 0.424275 (-0.361620) | 0.005027 / 0.007607 (-0.002580) | 0.343863 / 0.226044 (0.117818) | 3.376286 / 2.268929 (1.107358) | 1.933846 / 55.444624 (-53.510779) | 1.667316 / 6.876477 (-5.209161) | 1.815621 / 2.142072 (-0.326451) | 0.639378 / 4.805227 (-4.165850) | 0.116514 / 6.500664 (-6.384150) | 0.042191 / 0.075469 (-0.033279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007103 / 1.841788 (-0.834685) | 12.791193 / 8.074308 (4.716885) | 10.870575 / 10.191392 (0.679183) | 0.131040 / 0.680424 (-0.549384) | 0.016510 / 0.534201 (-0.517691) | 0.288372 / 0.579283 (-0.290911) | 0.275574 / 0.434364 (-0.158790) | 0.327801 / 0.540337 (-0.212536) | 0.415942 / 1.386936 (-0.970994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b775390900132834e5edf487f5cbbf1299af1d88 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6684/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6684 | https://github.com/huggingface/datasets/pull/6684 | true |
2,142,751,955 | https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name} | null | 2024-02-19T17:24:25Z | 6,683 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-19T16:26:51Z | https://api.github.com/repos/huggingface/datasets/issues/6683/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6683/timeline | Fix imagefolder dataset url | https://api.github.com/repos/huggingface/datasets/issues/6683/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-02-19T17:18:10Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6683",
"merged_at": "2024-02-19T17:18:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6683"
} | PR_kwDODunzps5nTxGu | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005851) | 0.003907 / 0.011008 (-0.007101) | 0.063524 / 0.038508 (0.025016) | 0.031773 / 0.023109 (0.008664) | 0.244672 / 0.275898 (-0.031226) | 0.293342 / 0.323480 (-0.030138) | 0.004091 / 0.007986 (-0.003895) | 0.002837 / 0.004328 (-0.001491) | 0.049181 / 0.004250 (0.044930) | 0.044515 / 0.037052 (0.007462) | 0.263932 / 0.258489 (0.005443) | 0.288412 / 0.293841 (-0.005429) | 0.028338 / 0.128546 (-0.100208) | 0.010865 / 0.075646 (-0.064781) | 0.207979 / 0.419271 (-0.211293) | 0.036149 / 0.043533 (-0.007384) | 0.250674 / 0.255139 (-0.004465) | 0.263232 / 0.283200 (-0.019968) | 0.017919 / 0.141683 (-0.123763) | 1.127794 / 1.452155 (-0.324360) | 1.172071 / 1.492716 (-0.320645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090435 / 0.018006 (0.072429) | 0.300041 / 0.000490 (0.299552) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018986 / 0.037411 (-0.018426) | 0.064872 / 0.014526 (0.050346) | 0.074738 / 0.176557 (-0.101818) | 0.121577 / 0.737135 (-0.615558) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279471 / 0.215209 (0.064262) | 2.743066 / 2.077655 (0.665411) | 1.429511 / 1.504120 (-0.074609) | 1.315391 / 1.541195 (-0.225804) | 1.371255 / 
1.468490 (-0.097235) | 0.570708 / 4.584777 (-4.014069) | 2.373047 / 3.745712 (-1.372666) | 2.813198 / 5.269862 (-2.456663) | 1.768928 / 4.565676 (-2.796749) | 0.066031 / 0.424275 (-0.358244) | 0.005074 / 0.007607 (-0.002533) | 0.333484 / 0.226044 (0.107440) | 3.295002 / 2.268929 (1.026074) | 1.796089 / 55.444624 (-53.648535) | 1.521849 / 6.876477 (-5.354627) | 1.604417 / 2.142072 (-0.537655) | 0.645235 / 4.805227 (-4.159992) | 0.119226 / 6.500664 (-6.381439) | 0.043275 / 0.075469 (-0.032194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986350 / 1.841788 (-0.855438) | 11.921886 / 8.074308 (3.847578) | 9.878841 / 10.191392 (-0.312551) | 0.141072 / 0.680424 (-0.539352) | 0.014514 / 0.534201 (-0.519687) | 0.304060 / 0.579283 (-0.275223) | 0.267844 / 0.434364 (-0.166520) | 0.324881 / 0.540337 (-0.215457) | 0.421426 / 1.386936 (-0.965510) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006030) | 0.003942 / 0.011008 (-0.007066) | 0.050629 / 0.038508 (0.012121) | 0.031176 / 0.023109 (0.008066) | 0.279627 / 0.275898 (0.003729) | 0.302667 / 0.323480 (-0.020813) | 0.004281 / 0.007986 (-0.003705) | 0.002900 / 0.004328 (-0.001428) | 0.048168 / 0.004250 (0.043918) | 0.046094 / 0.037052 (0.009042) | 0.290714 / 0.258489 (0.032224) | 0.321336 / 0.293841 (0.027496) | 0.047934 / 0.128546 (-0.080612) | 0.010773 / 0.075646 (-0.064873) | 0.059439 / 0.419271 (-0.359832) | 0.033644 / 0.043533 (-0.009889) | 0.273710 / 0.255139 (0.018571) | 0.295144 / 0.283200 (0.011944) | 0.018115 / 0.141683 (-0.123568) | 1.150302 / 1.452155 (-0.301853) | 1.197304 / 1.492716 (-0.295412) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090262 / 0.018006 (0.072255) | 0.300727 / 0.000490 (0.300238) | 0.000228 / 0.000200 (0.000028) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022706 / 0.037411 (-0.014706) | 0.077420 / 0.014526 (0.062894) | 0.089119 / 0.176557 (-0.087437) | 0.126760 / 0.737135 (-0.610375) | 0.090702 / 0.296338 (-0.205637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296558 / 0.215209 (0.081349) | 2.865311 / 2.077655 (0.787656) | 1.587355 / 1.504120 (0.083235) | 1.491660 / 1.541195 (-0.049534) | 1.513604 / 1.468490 (0.045114) | 0.565209 / 4.584777 (-4.019568) | 2.450648 / 3.745712 (-1.295064) | 2.709941 / 5.269862 (-2.559921) | 1.775032 / 4.565676 (-2.790645) | 0.063767 / 0.424275 (-0.360508) | 0.005047 / 0.007607 (-0.002560) | 0.347406 / 0.226044 (0.121361) | 3.416671 / 2.268929 (1.147743) | 1.949653 / 55.444624 (-53.494971) | 1.669885 / 6.876477 (-5.206592) | 1.848125 / 2.142072 (-0.293947) | 0.648179 / 4.805227 (-4.157048) | 0.116374 / 6.500664 (-6.384290) | 0.041816 / 0.075469 (-0.033653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007009 / 1.841788 (-0.834779) | 12.749964 / 8.074308 (4.675656) | 10.765890 / 10.191392 (0.574498) | 0.141743 / 0.680424 (-0.538681) | 0.016077 / 0.534201 (-0.518124) | 0.293275 / 0.579283 (-0.286008) | 0.277064 / 0.434364 (-0.157300) | 0.327039 / 0.540337 (-0.213299) | 0.421784 / 1.386936 (-0.965152) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f807cd4c733a3616011a3f7f53a9fa56f7d5f685 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6683 | https://github.com/huggingface/datasets/pull/6683 | true |
2,142,000,800 | https://api.github.com/repos/huggingface/datasets/issues/6682/labels{/name} | Update GitHub Actions to Node 20.
Fix #6679. | 2024-02-28T07:02:40Z | 6,682 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-19T10:10:50Z | https://api.github.com/repos/huggingface/datasets/issues/6682/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6682/timeline | Update GitHub Actions to Node 20 | https://api.github.com/repos/huggingface/datasets/issues/6682/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-28T06:56:34Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6682",
"merged_at": "2024-02-28T06:56:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6682"
} | PR_kwDODunzps5nRME6 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6682). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005292 / 0.011353 (-0.006060) | 0.003354 / 0.011008 (-0.007654) | 0.063150 / 0.038508 (0.024642) | 0.028616 / 0.023109 (0.005507) | 0.242267 / 0.275898 (-0.033631) | 0.267305 / 0.323480 (-0.056175) | 0.003041 / 0.007986 (-0.004944) | 0.003346 / 0.004328 (-0.000982) | 0.048268 / 0.004250 (0.044018) | 0.042070 / 0.037052 (0.005018) | 0.256526 / 0.258489 (-0.001963) | 0.279744 / 0.293841 (-0.014097) | 0.027862 / 0.128546 (-0.100684) | 0.010786 / 0.075646 (-0.064861) | 0.206998 / 0.419271 (-0.212273) | 0.035503 / 0.043533 (-0.008030) | 0.248454 / 0.255139 (-0.006685) | 0.265639 / 0.283200 (-0.017561) | 0.019590 / 0.141683 (-0.122093) | 1.134445 / 1.452155 (-0.317709) | 1.194956 / 1.492716 (-0.297761) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090987 / 0.018006 (0.072981) | 0.301907 / 0.000490 (0.301418) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018324 / 0.037411 (-0.019088) | 0.061492 / 0.014526 (0.046966) | 0.074166 / 0.176557 (-0.102391) | 0.119990 / 0.737135 (-0.617145) | 0.074554 / 0.296338 (-0.221785) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279646 / 0.215209 (0.064437) | 2.773819 / 2.077655 (0.696164) | 1.436460 / 1.504120 (-0.067660) | 1.310303 / 1.541195 (-0.230892) | 1.315328 / 
1.468490 (-0.153162) | 0.558328 / 4.584777 (-4.026449) | 2.383819 / 3.745712 (-1.361893) | 2.735034 / 5.269862 (-2.534827) | 1.724413 / 4.565676 (-2.841263) | 0.061476 / 0.424275 (-0.362799) | 0.004899 / 0.007607 (-0.002708) | 0.333195 / 0.226044 (0.107151) | 3.228900 / 2.268929 (0.959971) | 1.787315 / 55.444624 (-53.657309) | 1.526949 / 6.876477 (-5.349527) | 1.539816 / 2.142072 (-0.602257) | 0.636926 / 4.805227 (-4.168302) | 0.117533 / 6.500664 (-6.383131) | 0.041859 / 0.075469 (-0.033610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964637 / 1.841788 (-0.877151) | 11.296021 / 8.074308 (3.221713) | 9.375436 / 10.191392 (-0.815956) | 0.140330 / 0.680424 (-0.540094) | 0.013638 / 0.534201 (-0.520563) | 0.287046 / 0.579283 (-0.292237) | 0.265054 / 0.434364 (-0.169310) | 0.331548 / 0.540337 (-0.208790) | 0.438418 / 1.386936 (-0.948518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005284 / 0.011353 (-0.006069) | 0.003853 / 0.011008 (-0.007155) | 0.049301 / 0.038508 (0.010793) | 0.030477 / 0.023109 (0.007368) | 0.278507 / 0.275898 (0.002609) | 0.298245 / 0.323480 (-0.025235) | 0.004225 / 0.007986 (-0.003761) | 0.002736 / 0.004328 (-0.001593) | 0.049345 / 0.004250 (0.045094) | 0.045141 / 0.037052 (0.008088) | 0.290992 / 0.258489 (0.032503) | 0.317430 / 0.293841 (0.023589) | 0.029623 / 0.128546 (-0.098924) | 0.010351 / 0.075646 (-0.065295) | 0.058027 / 0.419271 (-0.361244) | 0.051306 / 0.043533 (0.007773) | 0.279947 / 0.255139 (0.024808) | 0.296916 / 0.283200 (0.013717) | 0.018859 / 0.141683 (-0.122823) | 1.153484 / 1.452155 (-0.298670) | 1.189141 / 1.492716 (-0.303575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091030 / 0.018006 (0.073024) | 0.301305 / 0.000490 (0.300815) | 0.000230 / 0.000200 (0.000030) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021801 / 0.037411 (-0.015611) | 0.075162 / 0.014526 (0.060636) | 0.086455 / 0.176557 (-0.090102) | 0.125431 / 0.737135 (-0.611705) | 0.087797 / 0.296338 (-0.208542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295950 / 0.215209 (0.080741) | 2.895839 / 2.077655 (0.818184) | 1.603121 / 1.504120 (0.099001) | 1.482162 / 1.541195 (-0.059033) | 1.474231 / 1.468490 (0.005741) | 0.571370 / 4.584777 (-4.013407) | 2.466864 / 3.745712 (-1.278848) | 2.607279 / 5.269862 (-2.662582) | 1.723106 / 4.565676 (-2.842571) | 0.062068 / 0.424275 (-0.362208) | 0.004958 / 0.007607 (-0.002649) | 0.345213 / 0.226044 (0.119168) | 3.403916 / 2.268929 (1.134987) | 1.935538 / 55.444624 (-53.509086) | 1.658930 / 6.876477 (-5.217547) | 1.767611 / 2.142072 (-0.374461) | 0.645780 / 4.805227 (-4.159447) | 0.116077 / 6.500664 (-6.384587) | 0.040774 / 0.075469 (-0.034695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025952 / 1.841788 (-0.815836) | 11.935970 / 8.074308 (3.861662) | 9.935799 / 10.191392 (-0.255593) | 0.131081 / 0.680424 (-0.549343) | 0.016010 / 0.534201 (-0.518191) | 0.285476 / 0.579283 (-0.293807) | 0.274928 / 0.434364 (-0.159435) | 0.325788 / 0.540337 (-0.214550) | 0.421666 / 1.386936 (-0.965270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7a79064b7a2255c0d6950dc998509ecefb893689 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6682/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6682 | https://github.com/huggingface/datasets/pull/6682 | true |
2,141,985,239 | https://api.github.com/repos/huggingface/datasets/issues/6681/labels{/name} | Update release instructions. | 2024-02-28T07:23:49Z | 6,681 | null | https://api.github.com/repos/huggingface/datasets | false | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | 2024-02-19T10:03:08Z | https://api.github.com/repos/huggingface/datasets/issues/6681/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6681/timeline | Update release instructions | https://api.github.com/repos/huggingface/datasets/issues/6681/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-28T07:17:22Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6681.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6681",
"merged_at": "2024-02-28T07:17:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6681.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6681"
} | PR_kwDODunzps5nRItQ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6681). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005410 / 0.011353 (-0.005943) | 0.003862 / 0.011008 (-0.007146) | 0.063457 / 0.038508 (0.024949) | 0.030081 / 0.023109 (0.006972) | 0.250657 / 0.275898 (-0.025241) | 0.275483 / 0.323480 (-0.047997) | 0.004048 / 0.007986 (-0.003938) | 0.002818 / 0.004328 (-0.001511) | 0.048940 / 0.004250 (0.044689) | 0.043397 / 0.037052 (0.006345) | 0.262160 / 0.258489 (0.003671) | 0.294154 / 0.293841 (0.000313) | 0.030028 / 0.128546 (-0.098519) | 0.010789 / 0.075646 (-0.064857) | 0.209665 / 0.419271 (-0.209606) | 0.035297 / 0.043533 (-0.008236) | 0.253169 / 0.255139 (-0.001970) | 0.271775 / 0.283200 (-0.011424) | 0.018332 / 0.141683 (-0.123351) | 1.152420 / 1.452155 (-0.299735) | 1.262767 / 1.492716 (-0.229949) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089990 / 0.018006 (0.071984) | 0.298552 / 0.000490 (0.298062) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018414 / 0.037411 (-0.018997) | 0.061566 / 0.014526 (0.047040) | 0.075360 / 0.176557 (-0.101196) | 0.123470 / 0.737135 (-0.613665) | 0.075215 / 0.296338 (-0.221124) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279563 / 0.215209 (0.064354) | 2.725212 / 2.077655 (0.647557) | 1.446413 / 1.504120 (-0.057707) | 1.321665 / 1.541195 (-0.219529) | 1.352475 / 
1.468490 (-0.116015) | 0.568440 / 4.584777 (-4.016337) | 2.393217 / 3.745712 (-1.352495) | 2.793150 / 5.269862 (-2.476711) | 1.764316 / 4.565676 (-2.801360) | 0.063157 / 0.424275 (-0.361118) | 0.005117 / 0.007607 (-0.002491) | 0.333310 / 0.226044 (0.107265) | 3.291000 / 2.268929 (1.022071) | 1.824654 / 55.444624 (-53.619971) | 1.558681 / 6.876477 (-5.317795) | 1.580558 / 2.142072 (-0.561514) | 0.649831 / 4.805227 (-4.155396) | 0.118674 / 6.500664 (-6.381990) | 0.042247 / 0.075469 (-0.033222) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976552 / 1.841788 (-0.865236) | 11.847361 / 8.074308 (3.773053) | 9.490786 / 10.191392 (-0.700606) | 0.141643 / 0.680424 (-0.538781) | 0.013653 / 0.534201 (-0.520548) | 0.284345 / 0.579283 (-0.294938) | 0.268314 / 0.434364 (-0.166050) | 0.339586 / 0.540337 (-0.200751) | 0.445239 / 1.386936 (-0.941697) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005754 / 0.011353 (-0.005599) | 0.004038 / 0.011008 (-0.006970) | 0.050027 / 0.038508 (0.011519) | 0.033598 / 0.023109 (0.010488) | 0.286514 / 0.275898 (0.010616) | 0.302493 / 0.323480 (-0.020986) | 0.004254 / 0.007986 (-0.003731) | 0.002827 / 0.004328 (-0.001502) | 0.050433 / 0.004250 (0.046182) | 0.046106 / 0.037052 (0.009054) | 0.301522 / 0.258489 (0.043033) | 0.325784 / 0.293841 (0.031943) | 0.030014 / 0.128546 (-0.098532) | 0.010891 / 0.075646 (-0.064756) | 0.059899 / 0.419271 (-0.359373) | 0.057252 / 0.043533 (0.013719) | 0.280276 / 0.255139 (0.025137) | 0.295632 / 0.283200 (0.012433) | 0.019060 / 0.141683 (-0.122622) | 1.141423 / 1.452155 (-0.310731) | 1.226960 / 1.492716 (-0.265757) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091919 / 0.018006 (0.073913) | 0.300769 / 0.000490 (0.300279) | 0.000220 / 0.000200 (0.000020) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022467 / 0.037411 (-0.014945) | 0.075342 / 0.014526 (0.060816) | 0.087988 / 0.176557 (-0.088569) | 0.128304 / 0.737135 (-0.608831) | 0.089058 / 0.296338 (-0.207280) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294662 / 0.215209 (0.079453) | 2.887743 / 2.077655 (0.810088) | 1.591756 / 1.504120 (0.087636) | 1.469249 / 1.541195 (-0.071945) | 1.495639 / 1.468490 (0.027149) | 0.575507 / 4.584777 (-4.009270) | 2.449674 / 3.745712 (-1.296038) | 2.737217 / 5.269862 (-2.532645) | 1.783066 / 4.565676 (-2.782610) | 0.063388 / 0.424275 (-0.360887) | 0.005044 / 0.007607 (-0.002563) | 0.344807 / 0.226044 (0.118763) | 3.410845 / 2.268929 (1.141916) | 1.967452 / 55.444624 (-53.477173) | 1.699884 / 6.876477 (-5.176593) | 1.862466 / 2.142072 (-0.279607) | 0.663714 / 4.805227 (-4.141513) | 0.118356 / 6.500664 (-6.382308) | 0.041176 / 0.075469 (-0.034293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.013523 / 1.841788 (-0.828264) | 12.498866 / 8.074308 (4.424558) | 10.382595 / 10.191392 (0.191203) | 0.141757 / 0.680424 (-0.538667) | 0.015992 / 0.534201 (-0.518209) | 0.295639 / 0.579283 (-0.283644) | 0.278382 / 0.434364 (-0.155982) | 0.330351 / 0.540337 (-0.209986) | 0.431293 / 1.386936 (-0.955643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf1aaa32eddd73076cf6600125661df4a32cb20a \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6681/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6681 | https://github.com/huggingface/datasets/pull/6681 | true |
2,141,979,527 | https://api.github.com/repos/huggingface/datasets/issues/6680/labels{/name} | null | 2024-02-19T10:06:43Z | 6,680 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-19T10:00:31Z | https://api.github.com/repos/huggingface/datasets/issues/6680/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6680/timeline | Set dev version | https://api.github.com/repos/huggingface/datasets/issues/6680/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-19T10:00:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6680.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6680",
"merged_at": "2024-02-19T10:00:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6680.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6680"
} | PR_kwDODunzps5nRHcz | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6680). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004981 / 0.011353 (-0.006372) | 0.003030 / 0.011008 (-0.007978) | 0.059862 / 0.038508 (0.021354) | 0.030595 / 0.023109 (0.007486) | 0.262638 / 0.275898 (-0.013260) | 0.276287 / 0.323480 (-0.047193) | 0.003955 / 0.007986 (-0.004030) | 0.002667 / 0.004328 (-0.001661) | 0.047827 / 0.004250 (0.043576) | 0.041170 / 0.037052 (0.004118) | 0.252494 / 0.258489 (-0.005995) | 0.277493 / 0.293841 (-0.016348) | 0.027269 / 0.128546 (-0.101277) | 0.010380 / 0.075646 (-0.065266) | 0.204404 / 0.419271 (-0.214867) | 0.035251 / 0.043533 (-0.008282) | 0.244368 / 0.255139 (-0.010771) | 0.258003 / 0.283200 (-0.025197) | 0.016751 / 0.141683 (-0.124932) | 1.134108 / 1.452155 (-0.318047) | 1.159969 / 1.492716 (-0.332748) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087011 / 0.018006 (0.069004) | 0.295577 / 0.000490 (0.295087) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017993 / 0.037411 (-0.019419) | 0.061690 / 0.014526 (0.047164) | 0.071791 / 0.176557 (-0.104765) | 0.118282 / 0.737135 (-0.618853) | 0.073453 / 0.296338 (-0.222885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284764 / 0.215209 (0.069555) | 2.771791 / 2.077655 (0.694136) | 1.469614 / 1.504120 (-0.034506) | 1.334096 / 1.541195 (-0.207099) | 1.339995 / 
1.468490 (-0.128495) | 0.562740 / 4.584777 (-4.022037) | 2.390219 / 3.745712 (-1.355493) | 2.679776 / 5.269862 (-2.590086) | 1.684397 / 4.565676 (-2.881279) | 0.062137 / 0.424275 (-0.362138) | 0.004934 / 0.007607 (-0.002673) | 0.336257 / 0.226044 (0.110212) | 3.256330 / 2.268929 (0.987401) | 1.801520 / 55.444624 (-53.643105) | 1.520662 / 6.876477 (-5.355815) | 1.537023 / 2.142072 (-0.605049) | 0.644360 / 4.805227 (-4.160867) | 0.115603 / 6.500664 (-6.385061) | 0.040601 / 0.075469 (-0.034868) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982992 / 1.841788 (-0.858796) | 11.002182 / 8.074308 (2.927873) | 9.564671 / 10.191392 (-0.626721) | 0.137682 / 0.680424 (-0.542742) | 0.013936 / 0.534201 (-0.520265) | 0.285898 / 0.579283 (-0.293385) | 0.264426 / 0.434364 (-0.169938) | 0.321615 / 0.540337 (-0.218723) | 0.420216 / 1.386936 (-0.966720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003165 / 0.011008 (-0.007844) | 0.048176 / 0.038508 (0.009668) | 0.030680 / 0.023109 (0.007571) | 0.258176 / 0.275898 (-0.017722) | 0.282342 / 0.323480 (-0.041138) | 0.004218 / 0.007986 (-0.003767) | 0.002616 / 0.004328 (-0.001713) | 0.047253 / 0.004250 (0.043003) | 0.044178 / 0.037052 (0.007126) | 0.276942 / 0.258489 (0.018453) | 0.312353 / 0.293841 (0.018512) | 0.046714 / 0.128546 (-0.081832) | 0.009892 / 0.075646 (-0.065755) | 0.056123 / 0.419271 (-0.363149) | 0.032691 / 0.043533 (-0.010842) | 0.268781 / 0.255139 (0.013642) | 0.285921 / 0.283200 (0.002722) | 0.016050 / 0.141683 (-0.125633) | 1.138058 / 1.452155 (-0.314096) | 1.193405 / 1.492716 (-0.299311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089280 / 0.018006 (0.071273) | 0.288425 / 0.000490 (0.287935) | 0.000201 / 0.000200 (0.000001) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021536 / 0.037411 (-0.015875) | 0.075157 / 0.014526 (0.060631) | 0.088943 / 0.176557 (-0.087613) | 0.125191 / 0.737135 (-0.611945) | 0.087991 / 0.296338 (-0.208348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285103 / 0.215209 (0.069894) | 2.791798 / 2.077655 (0.714144) | 1.518104 / 1.504120 (0.013984) | 1.388690 / 1.541195 (-0.152505) | 1.409896 / 1.468490 (-0.058594) | 0.554077 / 4.584777 (-4.030700) | 2.396994 / 3.745712 (-1.348718) | 2.596801 / 5.269862 (-2.673060) | 1.683761 / 4.565676 (-2.881915) | 0.061209 / 0.424275 (-0.363066) | 0.004735 / 0.007607 (-0.002873) | 0.337566 / 0.226044 (0.111522) | 3.258183 / 2.268929 (0.989254) | 1.886185 / 55.444624 (-53.558439) | 1.599148 / 6.876477 (-5.277329) | 1.726867 / 2.142072 (-0.415206) | 0.642784 / 4.805227 (-4.162444) | 0.114947 / 6.500664 (-6.385717) | 0.040450 / 0.075469 (-0.035019) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001316 / 1.841788 (-0.840472) | 11.695367 / 8.074308 (3.621058) | 9.854870 / 10.191392 (-0.336522) | 0.136462 / 0.680424 (-0.543961) | 0.016708 / 0.534201 (-0.517493) | 0.286421 / 0.579283 (-0.292862) | 0.270773 / 0.434364 (-0.163591) | 0.322947 / 0.540337 (-0.217390) | 0.416772 / 1.386936 (-0.970164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ba542847314bd349301937e59c3de04ce13aa5e \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6680/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6680 | https://github.com/huggingface/datasets/pull/6680 | true |
2,141,953,981 | https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name} | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
| 2024-02-28T06:56:35Z | 6,679 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | 2024-02-19T09:47:37Z | https://api.github.com/repos/huggingface/datasets/issues/6679/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6679/timeline | Node.js 16 GitHub Actions are deprecated | https://api.github.com/repos/huggingface/datasets/issues/6679/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | completed | MEMBER | 2024-02-28T06:56:35Z | null | I_kwDODunzps5_q5-9 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6679 | https://github.com/huggingface/datasets/issues/6679 | false |
2,141,902,154 | https://api.github.com/repos/huggingface/datasets/issues/6678/labels{/name} | null | 2024-02-19T10:03:00Z | 6,678 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-19T09:24:29Z | https://api.github.com/repos/huggingface/datasets/issues/6678/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6678/timeline | Release: 2.17.1 | https://api.github.com/repos/huggingface/datasets/issues/6678/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-19T09:56:52Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6678.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6678",
"merged_at": "2024-02-19T09:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6678.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6678"
} | PR_kwDODunzps5nQ2ZO | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6678). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003685 / 0.011008 (-0.007323) | 0.063191 / 0.038508 (0.024683) | 0.030506 / 0.023109 (0.007397) | 0.258033 / 0.275898 (-0.017865) | 0.269790 / 0.323480 (-0.053690) | 0.004180 / 0.007986 (-0.003805) | 0.002811 / 0.004328 (-0.001517) | 0.048718 / 0.004250 (0.044467) | 0.043473 / 0.037052 (0.006421) | 0.267306 / 0.258489 (0.008817) | 0.290315 / 0.293841 (-0.003526) | 0.027402 / 0.128546 (-0.101144) | 0.010782 / 0.075646 (-0.064864) | 0.207243 / 0.419271 (-0.212029) | 0.035637 / 0.043533 (-0.007896) | 0.264032 / 0.255139 (0.008893) | 0.270450 / 0.283200 (-0.012749) | 0.017407 / 0.141683 (-0.124276) | 1.107481 / 1.452155 (-0.344674) | 1.163187 / 1.492716 (-0.329529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.305169 / 0.000490 (0.304680) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017706 / 0.037411 (-0.019706) | 0.061431 / 0.014526 (0.046905) | 0.073541 / 0.176557 (-0.103016) | 0.117326 / 0.737135 (-0.619809) | 0.074368 / 0.296338 (-0.221971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284533 / 0.215209 (0.069324) | 2.775230 / 2.077655 (0.697575) | 1.455196 / 1.504120 (-0.048924) | 1.357651 / 1.541195 (-0.183544) | 1.337477 / 
1.468490 (-0.131013) | 0.567439 / 4.584777 (-4.017338) | 2.380612 / 3.745712 (-1.365100) | 2.792305 / 5.269862 (-2.477556) | 1.726501 / 4.565676 (-2.839176) | 0.061729 / 0.424275 (-0.362546) | 0.004928 / 0.007607 (-0.002679) | 0.331989 / 0.226044 (0.105944) | 3.301704 / 2.268929 (1.032776) | 1.805107 / 55.444624 (-53.639518) | 1.500434 / 6.876477 (-5.376043) | 1.535548 / 2.142072 (-0.606524) | 0.639490 / 4.805227 (-4.165737) | 0.115876 / 6.500664 (-6.384788) | 0.041895 / 0.075469 (-0.033574) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993584 / 1.841788 (-0.848203) | 11.596680 / 8.074308 (3.522371) | 9.631726 / 10.191392 (-0.559666) | 0.141153 / 0.680424 (-0.539271) | 0.014077 / 0.534201 (-0.520124) | 0.288237 / 0.579283 (-0.291046) | 0.261213 / 0.434364 (-0.173151) | 0.323897 / 0.540337 (-0.216441) | 0.420350 / 1.386936 (-0.966586) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005275 / 0.011353 (-0.006078) | 0.003739 / 0.011008 (-0.007269) | 0.049801 / 0.038508 (0.011293) | 0.030544 / 0.023109 (0.007435) | 0.264835 / 0.275898 (-0.011063) | 0.297738 / 0.323480 (-0.025742) | 0.004487 / 0.007986 (-0.003499) | 0.002835 / 0.004328 (-0.001493) | 0.048091 / 0.004250 (0.043841) | 0.044375 / 0.037052 (0.007322) | 0.286538 / 0.258489 (0.028049) | 0.319561 / 0.293841 (0.025720) | 0.047925 / 0.128546 (-0.080621) | 0.010816 / 0.075646 (-0.064831) | 0.057940 / 0.419271 (-0.361331) | 0.033588 / 0.043533 (-0.009945) | 0.270075 / 0.255139 (0.014936) | 0.290441 / 0.283200 (0.007242) | 0.017173 / 0.141683 (-0.124509) | 1.164686 / 1.452155 (-0.287469) | 1.213205 / 1.492716 (-0.279511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093408 / 0.018006 (0.075402) | 0.305525 / 0.000490 (0.305036) | 0.000235 / 0.000200 (0.000035) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021605 / 0.037411 (-0.015806) | 0.075479 / 0.014526 (0.060953) | 0.085990 / 0.176557 (-0.090567) | 0.124783 / 0.737135 (-0.612352) | 0.089108 / 0.296338 (-0.207230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306222 / 0.215209 (0.091013) | 2.987282 / 2.077655 (0.909627) | 1.664714 / 1.504120 (0.160594) | 1.523136 / 1.541195 (-0.018059) | 1.534112 / 1.468490 (0.065622) | 0.566347 / 4.584777 (-4.018430) | 2.438641 / 3.745712 (-1.307071) | 2.669048 / 5.269862 (-2.600814) | 1.732935 / 4.565676 (-2.832741) | 0.063460 / 0.424275 (-0.360815) | 0.004973 / 0.007607 (-0.002634) | 0.366233 / 0.226044 (0.140189) | 3.553578 / 2.268929 (1.284649) | 1.984343 / 55.444624 (-53.460281) | 1.711038 / 6.876477 (-5.165439) | 1.857346 / 2.142072 (-0.284726) | 0.651077 / 4.805227 (-4.154150) | 0.118670 / 6.500664 (-6.381994) | 0.041839 / 0.075469 (-0.033631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008230 / 1.841788 (-0.833558) | 12.047403 / 8.074308 (3.973095) | 10.039053 / 10.191392 (-0.152339) | 0.141640 / 0.680424 (-0.538784) | 0.014758 / 0.534201 (-0.519443) | 0.285016 / 0.579283 (-0.294267) | 0.275461 / 0.434364 (-0.158903) | 0.325535 / 0.540337 (-0.214803) | 0.415871 / 1.386936 (-0.971065) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d2268261bf0fb3eed8faae6bc1fa20a25b4382c \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6678/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6678 | https://github.com/huggingface/datasets/pull/6678 | true |
2,141,244,167 | https://api.github.com/repos/huggingface/datasets/issues/6677/labels{/name} | If the cache directory is set, the information is not passed through.
Pass download config in as an arg too. | 2024-02-28T18:57:39Z | 6,677 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-18T23:48:57Z | https://api.github.com/repos/huggingface/datasets/issues/6677/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6677/timeline | Pass through information about location of cache directory. | https://api.github.com/repos/huggingface/datasets/issues/6677/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/94808782?v=4",
"events_url": "https://api.github.com/users/stridge-cruxml/events{/privacy}",
"followers_url": "https://api.github.com/users/stridge-cruxml/followers",
"following_url": "https://api.github.com/users/stridge-cruxml/following{/other_user}",
"gists_url": "https://api.github.com/users/stridge-cruxml/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stridge-cruxml",
"id": 94808782,
"login": "stridge-cruxml",
"node_id": "U_kgDOBaaqzg",
"organizations_url": "https://api.github.com/users/stridge-cruxml/orgs",
"received_events_url": "https://api.github.com/users/stridge-cruxml/received_events",
"repos_url": "https://api.github.com/users/stridge-cruxml/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stridge-cruxml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stridge-cruxml/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stridge-cruxml"
} | [] | null | null | CONTRIBUTOR | 2024-02-28T18:51:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6677.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6677",
"merged_at": "2024-02-28T18:51:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6677.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6677"
} | PR_kwDODunzps5nOmo_ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6677). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007162 / 0.011353 (-0.004191) | 0.004125 / 0.011008 (-0.006883) | 0.064011 / 0.038508 (0.025503) | 0.031795 / 0.023109 (0.008686) | 0.248761 / 0.275898 (-0.027137) | 0.275130 / 0.323480 (-0.048350) | 0.003138 / 0.007986 (-0.004847) | 0.002736 / 0.004328 (-0.001592) | 0.050515 / 0.004250 (0.046264) | 0.044787 / 0.037052 (0.007735) | 0.261997 / 0.258489 (0.003507) | 0.292170 / 0.293841 (-0.001671) | 0.028122 / 0.128546 (-0.100424) | 0.010780 / 0.075646 (-0.064866) | 0.208805 / 0.419271 (-0.210467) | 0.036362 / 0.043533 (-0.007171) | 0.251599 / 0.255139 (-0.003540) | 0.271200 / 0.283200 (-0.012000) | 0.020215 / 0.141683 (-0.121468) | 1.133352 / 1.452155 (-0.318803) | 1.185240 / 1.492716 (-0.307477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089990 / 0.018006 (0.071984) | 0.298099 / 0.000490 (0.297609) | 0.000221 / 0.000200 (0.000021) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018432 / 0.037411 (-0.018980) | 0.062641 / 0.014526 (0.048115) | 0.075210 / 0.176557 (-0.101346) | 0.122239 / 0.737135 (-0.614897) | 0.078914 / 0.296338 (-0.217424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287682 / 0.215209 (0.072473) | 2.815030 / 2.077655 (0.737375) | 1.499512 / 1.504120 (-0.004607) | 1.370210 / 1.541195 (-0.170985) | 1.381944 / 
1.468490 (-0.086546) | 0.571645 / 4.584777 (-4.013132) | 2.377773 / 3.745712 (-1.367939) | 2.757206 / 5.269862 (-2.512655) | 1.717159 / 4.565676 (-2.848518) | 0.063038 / 0.424275 (-0.361237) | 0.004913 / 0.007607 (-0.002694) | 0.340854 / 0.226044 (0.114810) | 3.348087 / 2.268929 (1.079159) | 1.843123 / 55.444624 (-53.601502) | 1.569714 / 6.876477 (-5.306763) | 1.593791 / 2.142072 (-0.548281) | 0.642865 / 4.805227 (-4.162362) | 0.116933 / 6.500664 (-6.383731) | 0.041891 / 0.075469 (-0.033578) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976453 / 1.841788 (-0.865334) | 12.229986 / 8.074308 (4.155678) | 9.617912 / 10.191392 (-0.573480) | 0.141292 / 0.680424 (-0.539132) | 0.013732 / 0.534201 (-0.520469) | 0.291424 / 0.579283 (-0.287859) | 0.264748 / 0.434364 (-0.169616) | 0.345262 / 0.540337 (-0.195075) | 0.445126 / 1.386936 (-0.941810) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005286 / 0.011353 (-0.006067) | 0.003749 / 0.011008 (-0.007259) | 0.049070 / 0.038508 (0.010562) | 0.031779 / 0.023109 (0.008670) | 0.275636 / 0.275898 (-0.000262) | 0.296956 / 0.323480 (-0.026524) | 0.004278 / 0.007986 (-0.003708) | 0.002702 / 0.004328 (-0.001626) | 0.049658 / 0.004250 (0.045408) | 0.046025 / 0.037052 (0.008973) | 0.293238 / 0.258489 (0.034749) | 0.316676 / 0.293841 (0.022835) | 0.029277 / 0.128546 (-0.099269) | 0.010096 / 0.075646 (-0.065550) | 0.059861 / 0.419271 (-0.359411) | 0.054310 / 0.043533 (0.010778) | 0.275025 / 0.255139 (0.019886) | 0.292995 / 0.283200 (0.009796) | 0.018448 / 0.141683 (-0.123235) | 1.150805 / 1.452155 (-0.301350) | 1.178310 / 1.492716 (-0.314406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092644 / 0.018006 (0.074638) | 0.297979 / 0.000490 (0.297489) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021758 / 0.037411 (-0.015654) | 0.076734 / 0.014526 (0.062208) | 0.088522 / 0.176557 (-0.088034) | 0.126190 / 0.737135 (-0.610945) | 0.090466 / 0.296338 (-0.205873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305355 / 0.215209 (0.090146) | 2.978927 / 2.077655 (0.901272) | 1.612312 / 1.504120 (0.108192) | 1.485829 / 1.541195 (-0.055366) | 1.513303 / 1.468490 (0.044813) | 0.592368 / 4.584777 (-3.992409) | 2.448529 / 3.745712 (-1.297183) | 2.713460 / 5.269862 (-2.556402) | 1.803859 / 4.565676 (-2.761817) | 0.065630 / 0.424275 (-0.358645) | 0.005072 / 0.007607 (-0.002535) | 0.358340 / 0.226044 (0.132295) | 3.528516 / 2.268929 (1.259588) | 1.977901 / 55.444624 (-53.466723) | 1.692526 / 6.876477 (-5.183950) | 1.858405 / 2.142072 (-0.283668) | 0.676169 / 4.805227 (-4.129059) | 0.121136 / 6.500664 (-6.379528) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011801 / 1.841788 (-0.829987) | 12.496459 / 8.074308 (4.422151) | 10.465659 / 10.191392 (0.274267) | 0.154121 / 0.680424 (-0.526302) | 0.016796 / 0.534201 (-0.517405) | 0.288908 / 0.579283 (-0.290376) | 0.274328 / 0.434364 (-0.160036) | 0.322366 / 0.540337 (-0.217971) | 0.423498 / 1.386936 (-0.963438) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#52b9273b5ddbcadfdb512a693bc813b21e863b1b \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6677/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6677 | https://github.com/huggingface/datasets/pull/6677 | true |
2,140,648,619 | https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name} | ### Describe the bug
Trying to read a bunch of JSON files into the Dataset class, but the default approach doesn't work. I don't get why it works when I read the files one by one but not when I pass them as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
This doesn't work
```python
from datasets import Dataset
# dir contains 100 json files.
Dataset.from_json("/PUT SOME PATH HERE/*")
```
This works:
```python
from datasets import Dataset, concatenate_datasets
ls_ds = []
for file in list_of_json_files:
ls_ds.append(Dataset.from_json(file))
ds = concatenate_datasets(ls_ds)
```
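For reference, a minimal sketch of an alternative workaround (assuming every target file ends in `.json`, so stray files in the same directory are skipped — which, per the comments below, is what tripped up the `*` pattern):
```python
from glob import glob

from datasets import Dataset

# Restrict the pattern to .json so other files in the directory are ignored.
json_files = sorted(glob("/PUT SOME PATH HERE/*.json"))
ds = Dataset.from_json(json_files)
```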
### Expected behavior
I expect this to read the JSON files properly; as it stands, the error message is not clear.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| 2024-03-02T20:47:22Z | 6,676 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-17T22:58:15Z | https://api.github.com/repos/huggingface/datasets/issues/6676/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6676/timeline | Can't Read List of JSON Files Properly | https://api.github.com/repos/huggingface/datasets/issues/6676/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4",
"events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}",
"followers_url": "https://api.github.com/users/lordsoffallen/followers",
"following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}",
"gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordsoffallen",
"id": 20232088,
"login": "lordsoffallen",
"node_id": "MDQ6VXNlcjIwMjMyMDg4",
"organizations_url": "https://api.github.com/users/lordsoffallen/orgs",
"received_events_url": "https://api.github.com/users/lordsoffallen/received_events",
"repos_url": "https://api.github.com/users/lordsoffallen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordsoffallen"
} | [] | null | null | NONE | null | null | I_kwDODunzps5_l7Sr | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?",
"I don't think we should filter for `*.json` as this might silently remove desired files for many users. And this could be a major breaking change for many organizations.\r\n\r\nYou could do the globbing yourself which would keep the code clean.\r\n\r\n```python\r\nfrom glob import glob\r\n\r\nDataset.from_json(glob('folder/*.json'))\r\n```",
"I think it should still be fine to log a warning message in case the folder contains different files? I also don't get why would this be breaking as in the end using `from_FILE_TYPE` should be able to read a specific file type only. Maybe some other use case I am not aware of but since globbing or this case not mentioned anywhere in the doc, I spent quite a bit of time trying to figure out where the issue was. Just making sure it's clear for users."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6676 | https://github.com/huggingface/datasets/issues/6676 | false |
2,139,640,381 | https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name} | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image, where convert is usually called in the dataset; likewise, native torchvision decode (https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html) and tensorflow.data pipelines (decode_jpeg or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg) have a channels arg that allows controlling the image mode in the decode step.
datasets currently requires this pattern (from [examples](https://huggingface.co/docs/datasets/main/en/image_process)):
```python
from torchvision.transforms import Compose, ColorJitter, ToTensor
jitter = Compose(
[
ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7),
ToTensor(),
]
)
def transforms(examples):
examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
return examples
```
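For comparison, a hedged sketch of what decode-time conversion could look like — the `mode` parameter on `Image()` is hypothetical here (not an existing argument), and `ds` / `jitter` are assumed to be the already-loaded image dataset and the transform defined above:
```python
from datasets import Image

# Hypothetical `mode` argument: conversion would happen during decode,
# so the transform function no longer needs image.convert("RGB").
ds = ds.cast_column("image", Image(mode="RGB"))

def transforms(examples):
    examples["pixel_values"] = [jitter(image) for image in examples["image"]]
    return examples
```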
### Motivation
It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms. This would reduce code differences when handling pipelines built on torchvision, webdataset, or HF datasets, and would avoid passing the image mode argument in two different stages of the pipeline...
### Your contribution
Can do a PR with guidance on how mode should be passed / set on the dataset. | 2024-03-18T15:41:34Z | 6,675 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-16T23:43:20Z | https://api.github.com/repos/huggingface/datasets/issues/6675/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6675/timeline | Allow image mode (color conversion) to be specified as part of datasets Image() decode | https://api.github.com/repos/huggingface/datasets/issues/6675/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman"
} | [] | null | completed | NONE | 2024-03-18T15:41:34Z | null | I_kwDODunzps5_iFI9 | [
"It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.cast_column(\"image\", Image(mode=...))\r\n```"
] | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6675 | https://github.com/huggingface/datasets/issues/6675 | false |
2,139,595,576 | https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name} | ### Describe the bug
For the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb) link in the notebook.
### Expected behavior
I believe it is supposed to link [here](https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb), which is the location mentioned in the README.
### Environment info
Colab | 2024-02-25T18:48:09Z | 6,674 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T22:51:35Z | https://api.github.com/repos/huggingface/datasets/issues/6674/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6674/timeline | Deprecated Overview.ipynb Link to new Quickstart Notebook invalid | https://api.github.com/repos/huggingface/datasets/issues/6674/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"events_url": "https://api.github.com/users/Codeblockz/events{/privacy}",
"followers_url": "https://api.github.com/users/Codeblockz/followers",
"following_url": "https://api.github.com/users/Codeblockz/following{/other_user}",
"gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Codeblockz",
"id": 55932554,
"login": "Codeblockz",
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"organizations_url": "https://api.github.com/users/Codeblockz/orgs",
"received_events_url": "https://api.github.com/users/Codeblockz/received_events",
"repos_url": "https://api.github.com/users/Codeblockz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Codeblockz"
} | [] | null | completed | CONTRIBUTOR | 2024-02-25T18:48:09Z | null | I_kwDODunzps5_h6M4 | [
"Good catch! Feel free to open a PR to fix the link."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6674 | https://github.com/huggingface/datasets/issues/6674 | false |
2,139,522,827 | https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name} | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this; datasets.IterableDataset does not.
In my own use of IterableDatasets I usually track the epoch count in a multiprocessing.Value, which crosses process boundaries.
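For illustration, a minimal sketch of that multiprocessing.Value workaround (hypothetical names, not part of the `datasets` API):
```python
# Sketch: share the epoch through a multiprocessing.Value so that persistent DataLoader
# workers see updates made by the training process.
import multiprocessing as mp
from torch.utils.data import IterableDataset
class SharedEpochDataset(IterableDataset):
    def __init__(self, hf_iterable_dataset, shared_epoch):
        self.ds = hf_iterable_dataset
        self.shared_epoch = shared_epoch  # lives in shared memory, visible to workers
    def __iter__(self):
        # Runs inside the (persistent) worker each time a new epoch iterator is created.
        self.ds.set_epoch(self.shared_epoch.value)
        yield from self.ds
shared_epoch = mp.Value("i", 0)
# loader = DataLoader(SharedEpochDataset(shuffled_dataset, shared_epoch),
#                     num_workers=4, persistent_workers=True)
# for epoch in range(epochs):
#     shared_epoch.value = epoch  # propagated because the Value is in shared memory
#     for example in loader:
#         ...
```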
### Steps to reproduce the bug
Use a streaming dataset (Iterable) w/ the recommended pattern below and `persistent_workers=True` in the torch DataLoader.
```python
for epoch in range(epochs):
shuffled_dataset.set_epoch(epoch)
for example in shuffled_dataset:
...
```
### Expected behavior
When the canonical bit of code above is used with `num_workers > 0` and `persistent_workers=True`, the epoch set via `set_epoch()` is propagated to the IterableDataset instances in the worker processes
### Environment info
N/A | 2024-02-22T13:17:14Z | 6,673 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | 2024-02-16T21:38:12Z | https://api.github.com/repos/huggingface/datasets/issues/6673/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6673/timeline | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | https://api.github.com/repos/huggingface/datasets/issues/6673/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman"
} | [] | null | null | NONE | null | null | I_kwDODunzps5_hocL | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6673 | https://github.com/huggingface/datasets/issues/6673 | false |
2,138,732,288 | https://api.github.com/repos/huggingface/datasets/issues/6672/labels{/name} | Remove deprecated `verbose` parameter from CSV builder.
Note that the `verbose` parameter is deprecated since pandas 2.2.0. See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450
Fix #6671. | 2024-02-19T09:26:34Z | 6,672 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-16T14:26:21Z | https://api.github.com/repos/huggingface/datasets/issues/6672/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6672/timeline | Remove deprecated verbose parameter from CSV builder | https://api.github.com/repos/huggingface/datasets/issues/6672/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-19T09:20:22Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6672",
"merged_at": "2024-02-19T09:20:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6672"
} | PR_kwDODunzps5nGAlw | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I am merging this PR (so that it is included in the next patch release) to remove the deprecation warning raised by the CSV builder from pandas 2.2.0.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005374 / 0.011353 (-0.005979) | 0.003833 / 0.011008 (-0.007175) | 0.063465 / 0.038508 (0.024957) | 0.029564 / 0.023109 (0.006455) | 0.252759 / 0.275898 (-0.023139) | 0.274726 / 0.323480 (-0.048754) | 0.004014 / 0.007986 (-0.003971) | 0.002754 / 0.004328 (-0.001574) | 0.049351 / 0.004250 (0.045101) | 0.041858 / 0.037052 (0.004806) | 0.269023 / 0.258489 (0.010534) | 0.290670 / 0.293841 (-0.003171) | 0.028435 / 0.128546 (-0.100111) | 0.010988 / 0.075646 (-0.064658) | 0.207447 / 0.419271 (-0.211824) | 0.035945 / 0.043533 (-0.007588) | 0.257336 / 0.255139 (0.002197) | 0.267310 / 0.283200 (-0.015890) | 0.018575 / 0.141683 (-0.123108) | 1.144515 / 1.452155 (-0.307640) | 1.214614 / 1.492716 (-0.278102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103527 / 0.018006 (0.085521) | 0.310607 / 0.000490 (0.310117) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018597 / 0.037411 (-0.018814) | 0.063176 / 0.014526 (0.048650) | 0.073553 / 0.176557 (-0.103003) | 0.120648 / 0.737135 (-0.616487) | 0.075625 / 0.296338 (-0.220713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289148 / 0.215209 (0.073939) | 2.798351 / 2.077655 (0.720696) | 1.487909 / 1.504120 (-0.016211) | 1.369945 / 1.541195 (-0.171250) | 1.378889 / 
1.468490 (-0.089602) | 0.569825 / 4.584777 (-4.014952) | 2.413309 / 3.745712 (-1.332403) | 2.795668 / 5.269862 (-2.474193) | 1.757748 / 4.565676 (-2.807929) | 0.064686 / 0.424275 (-0.359589) | 0.005027 / 0.007607 (-0.002580) | 0.341835 / 0.226044 (0.115791) | 3.349915 / 2.268929 (1.080987) | 1.864253 / 55.444624 (-53.580371) | 1.595788 / 6.876477 (-5.280688) | 1.666127 / 2.142072 (-0.475945) | 0.665239 / 4.805227 (-4.139989) | 0.120563 / 6.500664 (-6.380101) | 0.043649 / 0.075469 (-0.031820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988543 / 1.841788 (-0.853244) | 11.973275 / 8.074308 (3.898967) | 9.685401 / 10.191392 (-0.505991) | 0.141416 / 0.680424 (-0.539008) | 0.014328 / 0.534201 (-0.519873) | 0.287063 / 0.579283 (-0.292220) | 0.266284 / 0.434364 (-0.168080) | 0.324643 / 0.540337 (-0.215694) | 0.423845 / 1.386936 (-0.963091) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003770 / 0.011008 (-0.007239) | 0.050879 / 0.038508 (0.012371) | 0.031929 / 0.023109 (0.008819) | 0.297739 / 0.275898 (0.021841) | 0.319380 / 0.323480 (-0.004100) | 0.004348 / 0.007986 (-0.003637) | 0.002783 / 0.004328 (-0.001545) | 0.050024 / 0.004250 (0.045774) | 0.045209 / 0.037052 (0.008157) | 0.307608 / 0.258489 (0.049119) | 0.338168 / 0.293841 (0.044327) | 0.051712 / 0.128546 (-0.076834) | 0.011092 / 0.075646 (-0.064554) | 0.059830 / 0.419271 (-0.359441) | 0.033894 / 0.043533 (-0.009638) | 0.295278 / 0.255139 (0.040139) | 0.310749 / 0.283200 (0.027550) | 0.018676 / 0.141683 (-0.123007) | 1.201086 / 1.452155 (-0.251069) | 1.258214 / 1.492716 (-0.234502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094079 / 0.018006 (0.076073) | 0.304657 / 0.000490 (0.304168) | 0.000225 / 0.000200 (0.000026) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021969 / 0.037411 (-0.015442) | 0.075749 / 0.014526 (0.061223) | 0.087878 / 0.176557 (-0.088679) | 0.126022 / 0.737135 (-0.611114) | 0.089466 / 0.296338 (-0.206873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286415 / 0.215209 (0.071206) | 2.831867 / 2.077655 (0.754212) | 1.584119 / 1.504120 (0.079999) | 1.468454 / 1.541195 (-0.072740) | 1.495831 / 1.468490 (0.027341) | 0.579569 / 4.584777 (-4.005208) | 2.477248 / 3.745712 (-1.268464) | 2.830536 / 5.269862 (-2.439325) | 1.820188 / 4.565676 (-2.745488) | 0.064408 / 0.424275 (-0.359867) | 0.005156 / 0.007607 (-0.002451) | 0.342391 / 0.226044 (0.116347) | 3.424380 / 2.268929 (1.155452) | 1.993110 / 55.444624 (-53.451514) | 1.702971 / 6.876477 (-5.173506) | 1.844281 / 2.142072 (-0.297792) | 0.668208 / 4.805227 (-4.137020) | 0.120306 / 6.500664 (-6.380358) | 0.042127 / 0.075469 (-0.033342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.019118 / 1.841788 (-0.822670) | 12.418330 / 8.074308 (4.344022) | 10.474226 / 10.191392 (0.282834) | 0.148510 / 0.680424 (-0.531914) | 0.015107 / 0.534201 (-0.519094) | 0.289488 / 0.579283 (-0.289795) | 0.278149 / 0.434364 (-0.156215) | 0.334655 / 0.540337 (-0.205682) | 0.419127 / 1.386936 (-0.967809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58733d2824192fc748cc8730cf77c33be5ded2ea \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6672/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6672 | https://github.com/huggingface/datasets/pull/6672 | true |
2,138,727,870 | https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name} | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
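A minimal repro sketch (assuming pandas >= 2.2.0 together with an affected `datasets` release such as 2.17.0, where the builder forwards its `verbose` config value to `pd.read_csv`):
```python
# Loading any CSV through the packaged "csv" builder should surface the FutureWarning above.
import os
import tempfile
from datasets import load_dataset
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.csv")
    with open(path, "w") as f:
        f.write("a,b\n1,2\n")
    load_dataset("csv", data_files=path, cache_dir=tmp)  # emits FutureWarning about 'verbose'
```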
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | 2024-02-19T09:20:23Z | 6,671 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T14:23:46Z | https://api.github.com/repos/huggingface/datasets/issues/6671/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6671/timeline | CSV builder raises deprecation warning on verbose parameter | https://api.github.com/repos/huggingface/datasets/issues/6671/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | completed | MEMBER | 2024-02-19T09:20:23Z | null | I_kwDODunzps5_emW- | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6671 | https://github.com/huggingface/datasets/issues/6671 | false |
2,138,372,958 | https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name} | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transformers import AutoTokenizer, AutoModelForSequenceClassification
13 from transformers import Trainer, TrainingArguments
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
16 __version__ = "2.17.0"
17
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
65
66 from . import config
---> 67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
69 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
34 import pyarrow as pa
35 import pyarrow.lib as lib
---> 36 import pyarrow._parquet as _parquet
37
38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Expected behavior
Resolve the binary incompatibility
### Environment info
Google Colab notebook | 2024-02-17T04:26:34Z | 6,670 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T11:05:17Z | https://api.github.com/repos/huggingface/datasets/issues/6670/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6670/timeline | ValueError | https://api.github.com/repos/huggingface/datasets/issues/6670/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda"
} | [] | null | completed | NONE | 2024-02-16T14:43:53Z | null | I_kwDODunzps5_dPte | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggingface/datasets/issues/6670> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6670#event-11829788289>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YDQOBUFUWMR4C5O3QTYT5WDJAVCNFSM6AAAAABDL24S5SVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHAZDSNZYHAZDQOI>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6670 | https://github.com/huggingface/datasets/issues/6670 | false |
2,138,322,662 | https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name} | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1833 rng_to_sync = True
1835 step = -1
-> 1836 for step, inputs in enumerate(epoch_iterator):
1837 total_batched_samples += 1
1839 if self.args.include_num_input_tokens_seen:
File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self)
449 # We iterate one batch ahead to check when we are at the end
450 try:
--> 451 current_batch = next(dataloader_iter)
452 except StopIteration:
453 yield
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key)
1762 def __getitem__(self, key): # noqa: F811
1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 1764 return self._getitem(
1765 key,
1766 )
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs)
1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 1749 formatted_output = format_table(
1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1751 )
1752 return formatted_output
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns)
538 else:
539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type)
541 if output_all_columns:
542 if isinstance(formatted_output, MutableMapping):
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table)
58 return self.recursive_tensorize(row)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table)
153 def extract_row(self, pa_table: pa.Table) -> dict:
--> 154 return _unnest(self.extract_batch(pa_table))
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
--> 196 if any(
197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
196 if any(
--> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Please help me to resolve the above error
### Steps to reproduce the bug
Please update the deprecated `np.object` alias to the builtin `object` where it is used (see the traceback above).
### Expected behavior
`np.object` should be written as `object`.
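For illustration, a sketch of that change applied to the check shown in the traceback (hypothetical standalone helper; in practice, upgrading `datasets` picks up the fixed code, as noted in the comments):
```python
# Compare dtypes against the builtin `object` instead of the removed `np.object` alias.
import numpy as np
def list_to_numpy(array, np_array_kwargs=None):
    np_array_kwargs = np_array_kwargs or {}
    if len(array) > 0 and any(
        (isinstance(x, np.ndarray) and (x.dtype == object or x.shape != array[0].shape))
        or (isinstance(x, float) and np.isnan(x))
        for x in array
    ):
        # Ragged or object-typed content -> fall back to a NumPy object array
        return np.array(array, **{**np_array_kwargs, "dtype": object})
    return np.array(array, **np_array_kwargs)
print(list_to_numpy([np.zeros((2, 2)), np.zeros((3, 2))]).dtype)  # object (ragged shapes)
```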
### Environment info
kaggle notebook | 2024-03-01T10:58:00Z | 6,669 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T10:40:49Z | https://api.github.com/repos/huggingface/datasets/issues/6669/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6669/timeline | attribute error when writing trainer.train() | https://api.github.com/repos/huggingface/datasets/issues/6669/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda"
} | [] | null | completed | NONE | 2024-02-29T17:25:17Z | null | I_kwDODunzps5_dDbm | [
"Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.",
"Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/huggingface/datasets/issues/6669> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6669#event-11969246964>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YG2RRVMYONNKPLBVE3YV5SAPAVCNFSM6AAAAABDLZ3BTSVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHE3DSMRUGY4TMNA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6669 | https://github.com/huggingface/datasets/issues/6669 | false |
2,137,859,935 | https://api.github.com/repos/huggingface/datasets/issues/6668/labels{/name} | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 4
1 #hide_output
2 from datasets import load_dataset
----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")
7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True)
8 print(f"Features: {dataset['train'].column_names}")
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2583 # Build dataset for splits
2584 keep_in_memory = (
2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2586 )
-> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2588 # Rename and cast features to match task schema
2589 if task is not None:
2590 # To avoid issuing the same warning twice
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1243 # Create a dataset for each of the given splits
-> 1244 datasets = map_nested(
1245 partial(
1246 self._build_single_dataset,
1247 run_post_process=run_post_process,
1248 verification_mode=verification_mode,
1249 in_memory=in_memory,
1250 ),
1251 split,
1252 map_tuple=True,
1253 disable_tqdm=True,
1254 )
1255 if isinstance(datasets, dict):
1256 datasets = DatasetDict(datasets)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
--> 477 mapped = [
478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
477 mapped = [
--> 478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args)
368 # Singleton first to spare some computation
369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 370 return function(data_struct)
372 # Reduce logging to keep things readable in multiprocessing with tqdm
373 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory)
1271 split = Split(split)
1273 # Build base dataset
-> 1274 ds = self._as_dataset(
1275 split=split,
1276 in_memory=in_memory,
1277 )
1278 if run_post_process:
1279 for resource_file_name in self._post_processing_resources(split).values():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory)
1346 if self._check_legacy_cache():
1347 dataset_name = self.name
-> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read(
1349 name=dataset_name,
1350 instructions=split,
1351 split_infos=self.info.splits.values(),
1352 in_memory=in_memory,
1353 )
1354 fingerprint = self._get_dataset_fingerprint(split)
1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory)
252 if not files:
253 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 254 raise ValueError(msg)
255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
**ValueError: Instruction "validation" corresponds to no data!**
```
Looks like the data is not being loaded. Any advice would be appreciated. Thanks!
### Steps to reproduce the bug
Run all cells of Chapter 6 notebook.
### Expected behavior
Data should load correctly without any errors.
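One thing that may be worth trying (an assumption, not a confirmed fix): this error can come from a stale or partially written cache, so forcing a fresh download of the config might help.
```python
# Hedged workaround sketch: re-download the "3.0.0" config instead of reusing the local cache.
from datasets import load_dataset
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0", download_mode="force_redownload", trust_remote_code=True)
```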
### Environment info
- `datasets` version: 2.17.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.18
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 2024-02-16T04:40:56Z | 6,668 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T04:40:56Z | https://api.github.com/repos/huggingface/datasets/issues/6668/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6668/timeline | Chapter 6 - Issue Loading `cnn_dailymail` dataset | https://api.github.com/repos/huggingface/datasets/issues/6668/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/34660389?v=4",
"events_url": "https://api.github.com/users/hariravichandran/events{/privacy}",
"followers_url": "https://api.github.com/users/hariravichandran/followers",
"following_url": "https://api.github.com/users/hariravichandran/following{/other_user}",
"gists_url": "https://api.github.com/users/hariravichandran/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hariravichandran",
"id": 34660389,
"login": "hariravichandran",
"node_id": "MDQ6VXNlcjM0NjYwMzg5",
"organizations_url": "https://api.github.com/users/hariravichandran/orgs",
"received_events_url": "https://api.github.com/users/hariravichandran/received_events",
"repos_url": "https://api.github.com/users/hariravichandran/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hariravichandran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hariravichandran/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hariravichandran"
} | [] | null | null | NONE | null | null | I_kwDODunzps5_bSdf | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6668/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6668 | https://github.com/huggingface/datasets/issues/6668 | false |
2,137,769,552 | https://api.github.com/repos/huggingface/datasets/issues/6667/labels{/name} | ### Describe the bug
If you download Squad, it downloads the "plain_text" config, but the default config name is still "default". If you then enable offline mode, the cache lookup uses the config_id "default" and fails with:
ValueError: Couldn't find cache for squad for config 'default'
Available configs in the cache: ['plain_text']
### Steps to reproduce the bug
1. export HF_DATASETS_OFFLINE=0
2. load_dataset("squad")
3. export HF_DATASETS_OFFLINE=1
4. load_dataset("squad")
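A Python equivalent of these steps (a sketch; it assumes "squad" was already downloaded while online in steps 1–2, and the env var has to be set before `datasets` is imported):
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # offline mode is read when `datasets` is imported
from datasets import load_dataset
load_dataset("squad")
# ValueError: Couldn't find cache for squad for config 'default'
# Available configs in the cache: ['plain_text']
```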
### Expected behavior
We should change the config_name I guess?
### Environment info
linux, latest version of datasets | 2024-02-23T09:10:00Z | 6,667 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-16T02:36:55Z | https://api.github.com/repos/huggingface/datasets/issues/6667/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6667/timeline | Default config for squad is incorrect | https://api.github.com/repos/huggingface/datasets/issues/6667/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22651617?v=4",
"events_url": "https://api.github.com/users/kiddyboots216/events{/privacy}",
"followers_url": "https://api.github.com/users/kiddyboots216/followers",
"following_url": "https://api.github.com/users/kiddyboots216/following{/other_user}",
"gists_url": "https://api.github.com/users/kiddyboots216/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kiddyboots216",
"id": 22651617,
"login": "kiddyboots216",
"node_id": "MDQ6VXNlcjIyNjUxNjE3",
"organizations_url": "https://api.github.com/users/kiddyboots216/orgs",
"received_events_url": "https://api.github.com/users/kiddyboots216/received_events",
"repos_url": "https://api.github.com/users/kiddyboots216/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kiddyboots216/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiddyboots216/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kiddyboots216"
} | [] | null | null | NONE | null | null | I_kwDODunzps5_a8ZQ | [
"you can try: pip install datasets==2.16.1"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6667/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6667 | https://github.com/huggingface/datasets/issues/6667 | false |
2,136,136,425 | https://api.github.com/repos/huggingface/datasets/issues/6665/labels{/name} | Fix this code provided by @clefourrier
```python
import datasets
import os
token = os.getenv("TOKEN")
results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)
results["test"] = datasets.Dataset.from_list([row for row in results["test"] if row["model"] != "StateFlow"])
results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test")
```
```
ValueError Traceback (most recent call last)
Cell In[43], line 1
----> 1 results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test")
File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/arrow_dataset.py:5498, in Dataset.push_to_hub(self, repo_id, config_name, split, private, token, branch, max_shard_size, num_shards, embed_external_files)
5496 repo_info.dataset_size = (repo_info.dataset_size or 0) + dataset_nbytes
5497 repo_info.size_in_bytes = repo_info.download_size + repo_info.dataset_size
-> 5498 repo_info.splits[split] = SplitInfo(
5499 split, num_bytes=dataset_nbytes, num_examples=len(self), dataset_name=dataset_name
5500 )
5501 info_to_dump = repo_info
5502 # create the metadata configs if it was uploaded with push_to_hub before metadata configs existed
File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/splits.py:541, in SplitDict.__setitem__(self, key, value)
539 raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')")
540 if key in self:
--> 541 raise ValueError(f"Split {key} already present")
542 super().__setitem__(key, value)
ValueError: Split test already present
``` | 2024-03-01T16:02:46Z | 6,665 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-15T10:17:08Z | https://api.github.com/repos/huggingface/datasets/issues/6665/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6665/timeline | Allow SplitDict setitem to replace existing SplitInfo | https://api.github.com/repos/huggingface/datasets/issues/6665/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-01T15:56:38Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6665.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6665",
"merged_at": "2024-03-01T15:56:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6665.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6665"
} | PR_kwDODunzps5m9JgW | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6665). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004968 / 0.011353 (-0.006385) | 0.003732 / 0.011008 (-0.007276) | 0.063672 / 0.038508 (0.025164) | 0.027066 / 0.023109 (0.003957) | 0.253306 / 0.275898 (-0.022592) | 0.283382 / 0.323480 (-0.040098) | 0.004217 / 0.007986 (-0.003768) | 0.002865 / 0.004328 (-0.001464) | 0.048672 / 0.004250 (0.044421) | 0.040740 / 0.037052 (0.003688) | 0.271848 / 0.258489 (0.013359) | 0.293162 / 0.293841 (-0.000679) | 0.027410 / 0.128546 (-0.101136) | 0.010605 / 0.075646 (-0.065042) | 0.210545 / 0.419271 (-0.208726) | 0.036085 / 0.043533 (-0.007447) | 0.259807 / 0.255139 (0.004668) | 0.274056 / 0.283200 (-0.009144) | 0.018812 / 0.141683 (-0.122871) | 1.116687 / 1.452155 (-0.335468) | 1.164276 / 1.492716 (-0.328440) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092874 / 0.018006 (0.074868) | 0.355897 / 0.000490 (0.355407) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018461 / 0.037411 (-0.018950) | 0.062061 / 0.014526 (0.047535) | 0.072353 / 0.176557 (-0.104203) | 0.119162 / 0.737135 (-0.617974) | 0.082974 / 0.296338 (-0.213364) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291631 / 0.215209 (0.076422) | 2.861495 / 2.077655 (0.783841) | 1.496753 / 1.504120 (-0.007367) | 1.371164 / 1.541195 (-0.170031) | 1.415473 / 
1.468490 (-0.053018) | 0.566778 / 4.584777 (-4.017999) | 2.376209 / 3.745712 (-1.369503) | 2.812326 / 5.269862 (-2.457535) | 1.765640 / 4.565676 (-2.800037) | 0.063274 / 0.424275 (-0.361001) | 0.004933 / 0.007607 (-0.002674) | 0.342345 / 0.226044 (0.116301) | 3.407487 / 2.268929 (1.138558) | 1.856646 / 55.444624 (-53.587978) | 1.590284 / 6.876477 (-5.286193) | 1.610068 / 2.142072 (-0.532004) | 0.656007 / 4.805227 (-4.149220) | 0.118310 / 6.500664 (-6.382354) | 0.042596 / 0.075469 (-0.032873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991392 / 1.841788 (-0.850395) | 11.612397 / 8.074308 (3.538089) | 9.627836 / 10.191392 (-0.563556) | 0.130575 / 0.680424 (-0.549848) | 0.014152 / 0.534201 (-0.520049) | 0.289736 / 0.579283 (-0.289548) | 0.260041 / 0.434364 (-0.174323) | 0.339730 / 0.540337 (-0.200608) | 0.447529 / 1.386936 (-0.939407) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005315 / 0.011353 (-0.006038) | 0.003955 / 0.011008 (-0.007053) | 0.049618 / 0.038508 (0.011110) | 0.030404 / 0.023109 (0.007295) | 0.258727 / 0.275898 (-0.017171) | 0.282020 / 0.323480 (-0.041460) | 0.004356 / 0.007986 (-0.003629) | 0.002866 / 0.004328 (-0.001462) | 0.049122 / 0.004250 (0.044872) | 0.045534 / 0.037052 (0.008482) | 0.269560 / 0.258489 (0.011071) | 0.301225 / 0.293841 (0.007384) | 0.029786 / 0.128546 (-0.098761) | 0.010433 / 0.075646 (-0.065213) | 0.058222 / 0.419271 (-0.361049) | 0.052968 / 0.043533 (0.009435) | 0.256605 / 0.255139 (0.001467) | 0.279899 / 0.283200 (-0.003300) | 0.018233 / 0.141683 (-0.123450) | 1.164060 / 1.452155 (-0.288095) | 1.218049 / 1.492716 (-0.274667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093646 / 0.018006 (0.075639) | 0.288804 / 0.000490 (0.288314) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022193 / 0.037411 (-0.015219) | 0.075507 / 0.014526 (0.060981) | 0.086091 / 0.176557 (-0.090465) | 0.127433 / 0.737135 (-0.609703) | 0.087064 / 0.296338 (-0.209274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292459 / 0.215209 (0.077250) | 2.842430 / 2.077655 (0.764776) | 1.505824 / 1.504120 (0.001704) | 1.377052 / 1.541195 (-0.164143) | 1.408757 / 1.468490 (-0.059733) | 0.571705 / 4.584777 (-4.013072) | 2.459798 / 3.745712 (-1.285914) | 2.714826 / 5.269862 (-2.555035) | 1.782064 / 4.565676 (-2.783613) | 0.063113 / 0.424275 (-0.361162) | 0.005099 / 0.007607 (-0.002509) | 0.343624 / 0.226044 (0.117579) | 3.415806 / 2.268929 (1.146878) | 1.853253 / 55.444624 (-53.591371) | 1.584392 / 6.876477 (-5.292084) | 1.720384 / 2.142072 (-0.421689) | 0.646637 / 4.805227 (-4.158590) | 0.118072 / 6.500664 (-6.382593) | 0.041362 / 0.075469 (-0.034107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020086 / 1.841788 (-0.821701) | 12.303980 / 8.074308 (4.229672) | 10.322869 / 10.191392 (0.131477) | 0.140959 / 0.680424 (-0.539465) | 0.015372 / 0.534201 (-0.518829) | 0.288552 / 0.579283 (-0.290731) | 0.278243 / 0.434364 (-0.156121) | 0.328399 / 0.540337 (-0.211939) | 0.433618 / 1.386936 (-0.953318) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9469092d88ff7bb4d3f7fe6c2de0109ca458b5da \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6665/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6665 | https://github.com/huggingface/datasets/pull/6665 | true |
2,135,483,978 | https://api.github.com/repos/huggingface/datasets/issues/6664/labels{/name} | #6636 broke `write_examples_on_file` and `write_batch` from the class `ArrowWriter`. I'm undoing these changes. See #6663.
Note that the current implementation doesn't keep the columns in the same order as the schema, so each column ends up being cast against the wrong schema field. | 2024-02-16T14:02:39Z | 6,664 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-15T01:47:33Z | https://api.github.com/repos/huggingface/datasets/issues/6664/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6664/timeline | Revert the changes in `arrow_writer.py` from #6636 | https://api.github.com/repos/huggingface/datasets/issues/6664/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
} | [] | null | null | CONTRIBUTOR | 2024-02-16T02:31:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6664.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6664",
"merged_at": "2024-02-16T02:31:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6664.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6664"
} | PR_kwDODunzps5m67g0 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6664). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Hi! We can't revert this as the \"reverted\" implementation has quadratic time complexity. Instead, let's fix it:\r\n\r\nI agree, but it's the implementation we have had so far. Why don't we:\r\n1. Release a hotfix ASAP (since would be doing a revert, we know it works as before) so people can continue using this library fine since AFAIU right now mostly writing examples for people is broken.\r\n2. Then, focus on still applying the performance improvement and release again",
"The fix is straightforward, so one patch release (after this PR is merged) is enough.\r\n\r\nBtw, let's also add a test to `tests/test_arrow_writer.py` to avoid this issue in the future.",
"> Btw, let's also add a test to tests/test_arrow_writer.py to avoid this issue in the future.\r\n\r\nWould you mind adding such test, as you're more familiar with the codebase?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005083 / 0.011353 (-0.006270) | 0.003697 / 0.011008 (-0.007311) | 0.063302 / 0.038508 (0.024794) | 0.028866 / 0.023109 (0.005757) | 0.249987 / 0.275898 (-0.025911) | 0.270803 / 0.323480 (-0.052677) | 0.004096 / 0.007986 (-0.003890) | 0.002752 / 0.004328 (-0.001577) | 0.049156 / 0.004250 (0.044906) | 0.042936 / 0.037052 (0.005884) | 0.266907 / 0.258489 (0.008418) | 0.291462 / 0.293841 (-0.002379) | 0.027703 / 0.128546 (-0.100844) | 0.011006 / 0.075646 (-0.064641) | 0.206238 / 0.419271 (-0.213033) | 0.035446 / 0.043533 (-0.008087) | 0.248923 / 0.255139 (-0.006216) | 0.264141 / 0.283200 (-0.019058) | 0.017545 / 0.141683 (-0.124138) | 1.157145 / 1.452155 (-0.295009) | 1.199007 / 1.492716 (-0.293710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092741 / 0.018006 (0.074734) | 0.299057 / 0.000490 (0.298567) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017936 / 0.037411 (-0.019475) | 0.061552 / 0.014526 (0.047026) | 0.072938 / 0.176557 (-0.103618) | 0.118192 / 0.737135 (-0.618944) | 0.074589 / 0.296338 (-0.221750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287186 / 0.215209 (0.071977) | 2.795694 / 2.077655 (0.718039) | 1.474386 / 1.504120 (-0.029734) | 1.359065 / 1.541195 (-0.182130) | 1.375295 / 
1.468490 (-0.093196) | 0.569448 / 4.584777 (-4.015329) | 2.374428 / 3.745712 (-1.371284) | 2.770198 / 5.269862 (-2.499663) | 1.716346 / 4.565676 (-2.849330) | 0.063173 / 0.424275 (-0.361102) | 0.005031 / 0.007607 (-0.002576) | 0.333197 / 0.226044 (0.107153) | 3.271739 / 2.268929 (1.002811) | 1.826406 / 55.444624 (-53.618218) | 1.554537 / 6.876477 (-5.321939) | 1.565927 / 2.142072 (-0.576146) | 0.649796 / 4.805227 (-4.155431) | 0.118371 / 6.500664 (-6.382293) | 0.042536 / 0.075469 (-0.032933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969882 / 1.841788 (-0.871906) | 11.638201 / 8.074308 (3.563893) | 9.759370 / 10.191392 (-0.432022) | 0.128069 / 0.680424 (-0.552355) | 0.013493 / 0.534201 (-0.520708) | 0.287324 / 0.579283 (-0.291959) | 0.267542 / 0.434364 (-0.166821) | 0.320072 / 0.540337 (-0.220265) | 0.421132 / 1.386936 (-0.965804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005679 / 0.011353 (-0.005674) | 0.003746 / 0.011008 (-0.007262) | 0.050149 / 0.038508 (0.011641) | 0.034382 / 0.023109 (0.011273) | 0.289802 / 0.275898 (0.013904) | 0.314993 / 0.323480 (-0.008487) | 0.004488 / 0.007986 (-0.003498) | 0.002786 / 0.004328 (-0.001542) | 0.047987 / 0.004250 (0.043737) | 0.046589 / 0.037052 (0.009537) | 0.301420 / 0.258489 (0.042931) | 0.335384 / 0.293841 (0.041543) | 0.050701 / 0.128546 (-0.077845) | 0.010987 / 0.075646 (-0.064660) | 0.058292 / 0.419271 (-0.360979) | 0.033973 / 0.043533 (-0.009560) | 0.288923 / 0.255139 (0.033784) | 0.306263 / 0.283200 (0.023064) | 0.018856 / 0.141683 (-0.122827) | 1.160721 / 1.452155 (-0.291433) | 1.208151 / 1.492716 (-0.284565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092633 / 0.018006 (0.074626) | 0.300353 / 0.000490 (0.299864) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022257 / 0.037411 (-0.015154) | 0.075417 / 0.014526 (0.060892) | 0.087289 / 0.176557 (-0.089268) | 0.125416 / 0.737135 (-0.611720) | 0.088751 / 0.296338 (-0.207588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286477 / 0.215209 (0.071268) | 2.801931 / 2.077655 (0.724277) | 1.553034 / 1.504120 (0.048914) | 1.426152 / 1.541195 (-0.115043) | 1.443824 / 1.468490 (-0.024666) | 0.563298 / 4.584777 (-4.021479) | 2.428968 / 3.745712 (-1.316744) | 2.685964 / 5.269862 (-2.583897) | 1.752304 / 4.565676 (-2.813372) | 0.064174 / 0.424275 (-0.360101) | 0.005079 / 0.007607 (-0.002528) | 0.344899 / 0.226044 (0.118855) | 3.372528 / 2.268929 (1.103600) | 1.900723 / 55.444624 (-53.543901) | 1.623721 / 6.876477 (-5.252756) | 1.781009 / 2.142072 (-0.361064) | 0.655229 / 4.805227 (-4.149998) | 0.116050 / 6.500664 (-6.384614) | 0.040374 / 0.075469 (-0.035095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004714 / 1.841788 (-0.837074) | 12.108179 / 8.074308 (4.033871) | 10.233447 / 10.191392 (0.042055) | 0.141438 / 0.680424 (-0.538986) | 0.015387 / 0.534201 (-0.518814) | 0.288068 / 0.579283 (-0.291216) | 0.277025 / 0.434364 (-0.157339) | 0.331714 / 0.540337 (-0.208623) | 0.424209 / 1.386936 (-0.962727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bdebf1922663c30744efb8869c86b28f102b84dd \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6664/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6664 | https://github.com/huggingface/datasets/pull/6664 | true |
2,135,480,811 | https://api.github.com/repos/huggingface/datasets/issues/6663/labels{/name} | ### Describe the bug
`write_examples_on_file` and `write_batch` have been broken in `ArrowWriter` since #6636: the column order is no longer kept in sync with the schema, so these functions fail unless the two orders happen to align.
### Steps to reproduce the bug
Call `write_batch` with a batch whose columns are not in the same order as the writer's features; with more than a couple of columns it is very likely to break.
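For example, a minimal sketch of the kind of call that triggers it (column names and file path are illustrative):
```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"a": Value("int64"), "b": Value("string")})
with ArrowWriter(features=features, path="out.arrow") as writer:
    # the batch lists "b" before "a", i.e. not in the schema order;
    # on the affected version the values get matched against the wrong types
    writer.write_batch({"b": ["x", "y"], "a": [1, 2]})
    writer.finalize()
```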
### Expected behavior
I expect these functions to work, instead of it trying to cast a column to its incorrect type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 2024-02-16T09:25:00Z | 6,663 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-15T01:43:27Z | https://api.github.com/repos/huggingface/datasets/issues/6663/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6663/timeline | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | https://api.github.com/repos/huggingface/datasets/issues/6663/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
} | [] | null | completed | CONTRIBUTOR | 2024-02-16T09:25:00Z | null | I_kwDODunzps5_SNnr | [
"Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.",
"> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a revert is a fast thing to do) so people can continue using this library and then focus on still applying the improvement.",
"Fixed by #6664 "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6663/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6663 | https://github.com/huggingface/datasets/issues/6663 | false |
2,132,425,812 | https://api.github.com/repos/huggingface/datasets/issues/6662/labels{/name} | When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("InstaDeepAI/multi_species_genomes")
/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py:1454: FutureWarning: The repository for InstaDeepAI/multi_species_genomes contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/InstaDeepAI/multi_species_genomes
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Downloading builder script: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.51k/7.51k [00:00<00:00, 7.67MB/s]
Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.2k/17.2k [00:00<00:00, 11.0MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2548, in load_dataset
builder_instance = load_dataset_builder(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2220, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1844, in dataset_module_factory
).get_module()
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1466, in get_module
local_imports = _download_additional_modules(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 346, in _download_additional_modules
raise ImportError(
ImportError: To be able to use InstaDeepAI/multi_species_genomes, you need to install the following dependency: Bio.
Please install it using 'pip install Bio' for instance.
>>>
```
`Bio` comes from the `biopython` package that can be installed with `pip install biopython`, not with `pip install Bio` as suggested.
This PR adds special logic to show the correct package name in the error message of `_download_additional_modules`, similar to what is already done for `sklearn` / `scikit-learn`.
There are more packages where the importable module name differs from the PyPI package name, so this could be made more generic, for example:
```
# Mapping of importable module names to their PyPI package names
package_map = {
"sklearn": "scikit-learn",
"Bio": "biopython",
"PIL": "Pillow",
"bs4": "beautifulsoup4"
}
for module_name, pypi_name in package_map.items():
if module_name in needs_to_be_installed.keys():
needs_to_be_installed[module_name] = pypi_name
``` | 2024-03-01T17:49:48Z | 6,662 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-13T14:15:04Z | https://api.github.com/repos/huggingface/datasets/issues/6662/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6662/timeline | fix: show correct package name to install biopython | https://api.github.com/repos/huggingface/datasets/issues/6662/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4",
"events_url": "https://api.github.com/users/BioGeek/events{/privacy}",
"followers_url": "https://api.github.com/users/BioGeek/followers",
"following_url": "https://api.github.com/users/BioGeek/following{/other_user}",
"gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BioGeek",
"id": 59344,
"login": "BioGeek",
"node_id": "MDQ6VXNlcjU5MzQ0",
"organizations_url": "https://api.github.com/users/BioGeek/orgs",
"received_events_url": "https://api.github.com/users/BioGeek/received_events",
"repos_url": "https://api.github.com/users/BioGeek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BioGeek"
} | [] | null | null | CONTRIBUTOR | 2024-03-01T17:43:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6662",
"merged_at": "2024-03-01T17:43:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6662"
} | PR_kwDODunzps5mwgKP | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6662). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003666 / 0.011008 (-0.007342) | 0.062660 / 0.038508 (0.024152) | 0.028656 / 0.023109 (0.005546) | 0.249601 / 0.275898 (-0.026297) | 0.265745 / 0.323480 (-0.057735) | 0.002935 / 0.007986 (-0.005051) | 0.002606 / 0.004328 (-0.001723) | 0.048774 / 0.004250 (0.044523) | 0.043643 / 0.037052 (0.006591) | 0.263114 / 0.258489 (0.004625) | 0.284596 / 0.293841 (-0.009245) | 0.027818 / 0.128546 (-0.100728) | 0.010726 / 0.075646 (-0.064921) | 0.205900 / 0.419271 (-0.213371) | 0.035646 / 0.043533 (-0.007887) | 0.245599 / 0.255139 (-0.009540) | 0.267706 / 0.283200 (-0.015493) | 0.018441 / 0.141683 (-0.123242) | 1.143365 / 1.452155 (-0.308790) | 1.191823 / 1.492716 (-0.300893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089703 / 0.018006 (0.071696) | 0.298073 / 0.000490 (0.297583) | 0.000209 / 0.000200 (0.000009) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018068 / 0.037411 (-0.019343) | 0.061416 / 0.014526 (0.046890) | 0.075989 / 0.176557 (-0.100567) | 0.120765 / 0.737135 (-0.616370) | 0.075476 / 0.296338 (-0.220863) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284043 / 0.215209 (0.068834) | 2.770282 / 2.077655 (0.692627) | 1.473040 / 1.504120 (-0.031080) | 1.349064 / 1.541195 (-0.192131) | 1.362783 / 
1.468490 (-0.105708) | 0.560765 / 4.584777 (-4.024012) | 2.357731 / 3.745712 (-1.387981) | 2.745771 / 5.269862 (-2.524090) | 1.726764 / 4.565676 (-2.838913) | 0.061212 / 0.424275 (-0.363063) | 0.004902 / 0.007607 (-0.002705) | 0.336963 / 0.226044 (0.110919) | 3.324519 / 2.268929 (1.055591) | 1.825826 / 55.444624 (-53.618798) | 1.548811 / 6.876477 (-5.327666) | 1.570618 / 2.142072 (-0.571454) | 0.642411 / 4.805227 (-4.162816) | 0.116068 / 6.500664 (-6.384596) | 0.042433 / 0.075469 (-0.033036) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988402 / 1.841788 (-0.853386) | 11.509601 / 8.074308 (3.435293) | 9.555338 / 10.191392 (-0.636054) | 0.138728 / 0.680424 (-0.541696) | 0.014107 / 0.534201 (-0.520094) | 0.285465 / 0.579283 (-0.293818) | 0.263086 / 0.434364 (-0.171278) | 0.327469 / 0.540337 (-0.212869) | 0.444799 / 1.386936 (-0.942137) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005359 / 0.011353 (-0.005993) | 0.003605 / 0.011008 (-0.007403) | 0.049734 / 0.038508 (0.011226) | 0.029792 / 0.023109 (0.006683) | 0.276384 / 0.275898 (0.000486) | 0.297915 / 0.323480 (-0.025564) | 0.004949 / 0.007986 (-0.003036) | 0.002713 / 0.004328 (-0.001616) | 0.049499 / 0.004250 (0.045249) | 0.044969 / 0.037052 (0.007917) | 0.284558 / 0.258489 (0.026069) | 0.315170 / 0.293841 (0.021329) | 0.029457 / 0.128546 (-0.099089) | 0.010573 / 0.075646 (-0.065073) | 0.058191 / 0.419271 (-0.361080) | 0.051461 / 0.043533 (0.007928) | 0.270744 / 0.255139 (0.015605) | 0.291664 / 0.283200 (0.008465) | 0.018607 / 0.141683 (-0.123076) | 1.158799 / 1.452155 (-0.293355) | 1.210509 / 1.492716 (-0.282208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090277 / 0.018006 (0.072270) | 0.298748 / 0.000490 (0.298258) | 0.000228 / 0.000200 (0.000028) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021850 / 0.037411 (-0.015561) | 0.075433 / 0.014526 (0.060907) | 0.087171 / 0.176557 (-0.089386) | 0.125828 / 0.737135 (-0.611308) | 0.090343 / 0.296338 (-0.205996) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297267 / 0.215209 (0.082058) | 2.865234 / 2.077655 (0.787579) | 1.595024 / 1.504120 (0.090904) | 1.476100 / 1.541195 (-0.065094) | 1.494896 / 1.468490 (0.026406) | 0.569086 / 4.584777 (-4.015691) | 2.401976 / 3.745712 (-1.343736) | 2.676091 / 5.269862 (-2.593771) | 1.742087 / 4.565676 (-2.823590) | 0.065161 / 0.424275 (-0.359114) | 0.005006 / 0.007607 (-0.002602) | 0.342302 / 0.226044 (0.116257) | 3.450571 / 2.268929 (1.181643) | 1.928754 / 55.444624 (-53.515871) | 1.672823 / 6.876477 (-5.203653) | 1.798830 / 2.142072 (-0.343243) | 0.648730 / 4.805227 (-4.156498) | 0.116433 / 6.500664 (-6.384231) | 0.040683 / 0.075469 (-0.034786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006158 / 1.841788 (-0.835630) | 12.200093 / 8.074308 (4.125785) | 10.180691 / 10.191392 (-0.010701) | 0.146620 / 0.680424 (-0.533804) | 0.015621 / 0.534201 (-0.518580) | 0.287956 / 0.579283 (-0.291327) | 0.277231 / 0.434364 (-0.157133) | 0.323815 / 0.540337 (-0.216522) | 0.429655 / 1.386936 (-0.957281) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273e16f9a21d6eaba1fd40fbdf0c05e66642c5a7 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6662/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6662 | https://github.com/huggingface/datasets/pull/6662 | true |
2,132,296,267 | https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name} | ### Describe the bug
The library cannot be imported on Google Colab; the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import datasets`
### Expected behavior
Should be possible to use the library
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | 2024-02-25T16:37:54Z | 6,661 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-13T13:12:40Z | https://api.github.com/repos/huggingface/datasets/issues/6661/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6661/timeline | Import error on Google Colab | https://api.github.com/repos/huggingface/datasets/issues/6661/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4",
"events_url": "https://api.github.com/users/kithogue/events{/privacy}",
"followers_url": "https://api.github.com/users/kithogue/followers",
"following_url": "https://api.github.com/users/kithogue/following{/other_user}",
"gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kithogue",
"id": 16103566,
"login": "kithogue",
"node_id": "MDQ6VXNlcjE2MTAzNTY2",
"organizations_url": "https://api.github.com/users/kithogue/orgs",
"received_events_url": "https://api.github.com/users/kithogue/received_events",
"repos_url": "https://api.github.com/users/kithogue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kithogue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kithogue"
} | [] | null | completed | NONE | 2024-02-14T08:04:47Z | null | I_kwDODunzps5_GEJL | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert the `import os; os.kill(os.getpid(), 9)` cell between `!pip install -U datasets` and `import datasets` to do the same programmatically.",
"One possible cause might be the one pointed out by @mariosasko above, and you get the following warning on Colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\n\r\nOn the other hand, if the old version of `pyarrow` is not previously imported (before the installation of `datasets`), the reported issue here is not reproducible: `datasets` can be installed, imported and used on Colab.",
"Duplicate of:\r\n- #5923",
"Google Colab now pre-installs PyArrow 14.0.2, making this issue unlikely to happen. So, I'm unpinning it."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6661 | https://github.com/huggingface/datasets/issues/6661 | false |
2,131,977,011 | https://api.github.com/repos/huggingface/datasets/issues/6660/labels{/name} | This PR addresses an issue that occurs when a dataset uses the uint16 or uint32 datatypes and is then converted into a PyTorch-compatible format. Currently, doing so results in a TypeError caused by an unsupported datatype conversion, as illustrated by the following example:
```python
from datasets import Dataset, Sequence, Value, Features
def gen():
for i in range(100):
yield {'seq': list(range(i, i + 20))}
ds = Dataset.from_generator(gen, features=Features({'seq': Sequence(feature=Value(dtype='uint16'), length=-1)}))
ds.set_format('torch')
print(ds[0])
```
This code snippet triggers the following error due to the inability to convert numpy.uint16 arrays to a PyTorch-supported format:
```
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
```
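A rough sketch of the conversion idea (the helper name is illustrative, not the actual formatter code):
```python
import numpy as np
import torch

def to_torch_compatible(array: np.ndarray) -> np.ndarray:
    # torch has no general uint16/uint32 tensor support, so widen these dtypes to int64 first
    if array.dtype in (np.uint16, np.uint32):
        return array.astype(np.int64)
    return array

seq = np.arange(20, dtype=np.uint16)
tensor = torch.from_numpy(to_torch_compatible(seq))  # torch.int64 tensor
```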
This PR introduces an automatic mechanism to convert np.uint16 and np.uint32 datatypes to np.int64 for seamless compatibility with PyTorch formats, simplifying workflows and improving developer experience by eliminating the need for manual conversion handling. | 2024-03-01T19:01:57Z | 6,660 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-13T10:24:33Z | https://api.github.com/repos/huggingface/datasets/issues/6660/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6660/timeline | Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes | https://api.github.com/repos/huggingface/datasets/issues/6660/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/23399590?v=4",
"events_url": "https://api.github.com/users/mohalisad/events{/privacy}",
"followers_url": "https://api.github.com/users/mohalisad/followers",
"following_url": "https://api.github.com/users/mohalisad/following{/other_user}",
"gists_url": "https://api.github.com/users/mohalisad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mohalisad",
"id": 23399590,
"login": "mohalisad",
"node_id": "MDQ6VXNlcjIzMzk5NTkw",
"organizations_url": "https://api.github.com/users/mohalisad/orgs",
"received_events_url": "https://api.github.com/users/mohalisad/received_events",
"repos_url": "https://api.github.com/users/mohalisad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mohalisad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohalisad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mohalisad"
} | [] | null | null | CONTRIBUTOR | 2024-03-01T18:52:37Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6660",
"merged_at": "2024-03-01T18:52:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6660"
} | PR_kwDODunzps5mu9wU | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6660). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004995 / 0.011353 (-0.006357) | 0.003230 / 0.011008 (-0.007779) | 0.062836 / 0.038508 (0.024328) | 0.026684 / 0.023109 (0.003575) | 0.249286 / 0.275898 (-0.026612) | 0.272936 / 0.323480 (-0.050544) | 0.003952 / 0.007986 (-0.004033) | 0.002708 / 0.004328 (-0.001620) | 0.055346 / 0.004250 (0.051095) | 0.040023 / 0.037052 (0.002971) | 0.263350 / 0.258489 (0.004860) | 0.294727 / 0.293841 (0.000886) | 0.027280 / 0.128546 (-0.101266) | 0.010273 / 0.075646 (-0.065373) | 0.206035 / 0.419271 (-0.213236) | 0.035715 / 0.043533 (-0.007818) | 0.255474 / 0.255139 (0.000335) | 0.273960 / 0.283200 (-0.009240) | 0.018563 / 0.141683 (-0.123120) | 1.140013 / 1.452155 (-0.312142) | 1.188655 / 1.492716 (-0.304062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091895 / 0.018006 (0.073888) | 0.284621 / 0.000490 (0.284131) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018610 / 0.037411 (-0.018801) | 0.061554 / 0.014526 (0.047028) | 0.072454 / 0.176557 (-0.104103) | 0.120283 / 0.737135 (-0.616853) | 0.073744 / 0.296338 (-0.222595) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288850 / 0.215209 (0.073641) | 2.836761 / 2.077655 (0.759107) | 1.533407 / 1.504120 (0.029287) | 1.409394 / 1.541195 (-0.131801) | 1.421667 / 
1.468490 (-0.046823) | 0.566183 / 4.584777 (-4.018594) | 2.390670 / 3.745712 (-1.355043) | 2.732031 / 5.269862 (-2.537831) | 1.730886 / 4.565676 (-2.834791) | 0.064280 / 0.424275 (-0.359995) | 0.004959 / 0.007607 (-0.002648) | 0.342664 / 0.226044 (0.116619) | 3.398969 / 2.268929 (1.130040) | 1.887354 / 55.444624 (-53.557270) | 1.572955 / 6.876477 (-5.303522) | 1.596179 / 2.142072 (-0.545894) | 0.645844 / 4.805227 (-4.159383) | 0.118050 / 6.500664 (-6.382614) | 0.042158 / 0.075469 (-0.033311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959170 / 1.841788 (-0.882617) | 11.276491 / 8.074308 (3.202183) | 9.471198 / 10.191392 (-0.720194) | 0.128346 / 0.680424 (-0.552078) | 0.013851 / 0.534201 (-0.520350) | 0.286125 / 0.579283 (-0.293158) | 0.266915 / 0.434364 (-0.167449) | 0.332811 / 0.540337 (-0.207526) | 0.444780 / 1.386936 (-0.942156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005687) | 0.003267 / 0.011008 (-0.007741) | 0.050238 / 0.038508 (0.011730) | 0.032882 / 0.023109 (0.009773) | 0.269320 / 0.275898 (-0.006578) | 0.293140 / 0.323480 (-0.030340) | 0.004127 / 0.007986 (-0.003858) | 0.002728 / 0.004328 (-0.001601) | 0.049360 / 0.004250 (0.045109) | 0.043764 / 0.037052 (0.006712) | 0.291211 / 0.258489 (0.032722) | 0.319745 / 0.293841 (0.025904) | 0.029371 / 0.128546 (-0.099175) | 0.010212 / 0.075646 (-0.065434) | 0.059064 / 0.419271 (-0.360207) | 0.051148 / 0.043533 (0.007615) | 0.276698 / 0.255139 (0.021559) | 0.292329 / 0.283200 (0.009129) | 0.018349 / 0.141683 (-0.123334) | 1.150816 / 1.452155 (-0.301338) | 1.184292 / 1.492716 (-0.308425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091646 / 0.018006 (0.073640) | 0.301737 / 0.000490 (0.301247) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021529 / 0.037411 (-0.015883) | 0.075596 / 0.014526 (0.061070) | 0.087912 / 0.176557 (-0.088645) | 0.125240 / 0.737135 (-0.611895) | 0.088035 / 0.296338 (-0.208303) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305097 / 0.215209 (0.089888) | 2.979612 / 2.077655 (0.901957) | 1.647009 / 1.504120 (0.142889) | 1.520251 / 1.541195 (-0.020944) | 1.513361 / 1.468490 (0.044870) | 0.571733 / 4.584777 (-4.013044) | 2.415587 / 3.745712 (-1.330125) | 2.615983 / 5.269862 (-2.653879) | 1.732637 / 4.565676 (-2.833039) | 0.062830 / 0.424275 (-0.361445) | 0.004972 / 0.007607 (-0.002635) | 0.348559 / 0.226044 (0.122515) | 3.450567 / 2.268929 (1.181639) | 1.970743 / 55.444624 (-53.473882) | 1.702232 / 6.876477 (-5.174245) | 1.799592 / 2.142072 (-0.342480) | 0.649477 / 4.805227 (-4.155751) | 0.115940 / 6.500664 (-6.384724) | 0.040364 / 0.075469 (-0.035105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000014 / 1.841788 (-0.841773) | 11.937886 / 8.074308 (3.863578) | 10.169478 / 10.191392 (-0.021914) | 0.153359 / 0.680424 (-0.527064) | 0.015205 / 0.534201 (-0.518996) | 0.287812 / 0.579283 (-0.291471) | 0.278688 / 0.434364 (-0.155676) | 0.322831 / 0.540337 (-0.217507) | 0.425631 / 1.386936 (-0.961305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6e176efbed29374e7c2cd33da64aeeae3c11ca0f \"CML watermark\")\n"
] | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6660/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6660 | https://github.com/huggingface/datasets/pull/6660 | true |
2,129,229,810 | https://api.github.com/repos/huggingface/datasets/issues/6659/labels{/name} | Change default compression type from `None` to "infer", to align with pandas' defaults.
Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html) for compression, datasets enforces `None` as the default. This likely confuses users, as they expect the same behaviour, i.e. they expect that if they name their output file "dataset.jsonl.zst" then the compression will be inferred as "zstd" and the file will be compressed before writing.
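For illustration, a minimal sketch of the behaviour users expect (assuming `compression` is forwarded to the writer through `to_json_kwargs`; the file name and values are arbitrary):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# With pandas-style compression="infer", the ".zst" suffix would select zstd
# automatically; with the current default of None, the file is written
# uncompressed despite its name unless compression is passed explicitly.
ds.to_json("dataset.jsonl.zst", compression="zstd")
```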
Moreover, while it is probably outside of the scope of this pull request, the `compression` argument needs to be capable of taking a `dict` as input (along with `str`), as it does in pandas, in order to allow users to specify compression parameters. The current implementation will likely fail with `NotImplementedError`, as it expects either `None` or a `str` specifying the compression algorithm. | 2024-03-01T17:51:50Z | 6,659 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-11T23:49:07Z | https://api.github.com/repos/huggingface/datasets/issues/6659/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6659/timeline | Change default compression argument for JsonDatasetWriter | https://api.github.com/repos/huggingface/datasets/issues/6659/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rexhaif",
"id": 5154447,
"login": "Rexhaif",
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rexhaif"
} | [] | null | null | CONTRIBUTOR | 2024-03-01T17:44:55Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6659",
"merged_at": "2024-03-01T17:44:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6659"
} | PR_kwDODunzps5mlmmo | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6659). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Can someone check this out?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005008 / 0.011353 (-0.006345) | 0.003267 / 0.011008 (-0.007741) | 0.064140 / 0.038508 (0.025632) | 0.027419 / 0.023109 (0.004309) | 0.246692 / 0.275898 (-0.029206) | 0.271303 / 0.323480 (-0.052177) | 0.004127 / 0.007986 (-0.003859) | 0.002698 / 0.004328 (-0.001631) | 0.050415 / 0.004250 (0.046165) | 0.040323 / 0.037052 (0.003271) | 0.265738 / 0.258489 (0.007249) | 0.291556 / 0.293841 (-0.002285) | 0.027924 / 0.128546 (-0.100622) | 0.010206 / 0.075646 (-0.065441) | 0.207106 / 0.419271 (-0.212165) | 0.036087 / 0.043533 (-0.007446) | 0.250412 / 0.255139 (-0.004727) | 0.269014 / 0.283200 (-0.014186) | 0.018102 / 0.141683 (-0.123581) | 1.135137 / 1.452155 (-0.317018) | 1.177718 / 1.492716 (-0.314998) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095557 / 0.018006 (0.077550) | 0.306235 / 0.000490 (0.305745) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018217 / 0.037411 (-0.019194) | 0.060993 / 0.014526 (0.046467) | 0.072748 / 0.176557 (-0.103808) | 0.119357 / 0.737135 (-0.617778) | 0.073719 / 0.296338 (-0.222619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295924 / 0.215209 (0.080715) | 2.901071 / 2.077655 (0.823417) | 1.497316 / 1.504120 (-0.006804) | 1.371232 / 1.541195 (-0.169962) | 1.395643 / 
1.468490 (-0.072847) | 0.577548 / 4.584777 (-4.007229) | 2.383813 / 3.745712 (-1.361899) | 2.764451 / 5.269862 (-2.505411) | 1.733074 / 4.565676 (-2.832602) | 0.063730 / 0.424275 (-0.360545) | 0.004933 / 0.007607 (-0.002674) | 0.347135 / 0.226044 (0.121090) | 3.390814 / 2.268929 (1.121885) | 1.849454 / 55.444624 (-53.595170) | 1.561801 / 6.876477 (-5.314675) | 1.587818 / 2.142072 (-0.554254) | 0.652061 / 4.805227 (-4.153166) | 0.117195 / 6.500664 (-6.383469) | 0.041922 / 0.075469 (-0.033548) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949050 / 1.841788 (-0.892738) | 11.353664 / 8.074308 (3.279355) | 9.261581 / 10.191392 (-0.929811) | 0.140374 / 0.680424 (-0.540050) | 0.014254 / 0.534201 (-0.519946) | 0.288124 / 0.579283 (-0.291159) | 0.262888 / 0.434364 (-0.171476) | 0.330774 / 0.540337 (-0.209564) | 0.444777 / 1.386936 (-0.942159) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005162 / 0.011353 (-0.006191) | 0.003418 / 0.011008 (-0.007591) | 0.049764 / 0.038508 (0.011256) | 0.029336 / 0.023109 (0.006226) | 0.278570 / 0.275898 (0.002672) | 0.300676 / 0.323480 (-0.022804) | 0.004292 / 0.007986 (-0.003694) | 0.002745 / 0.004328 (-0.001584) | 0.049194 / 0.004250 (0.044943) | 0.044036 / 0.037052 (0.006984) | 0.299258 / 0.258489 (0.040769) | 0.324451 / 0.293841 (0.030610) | 0.029777 / 0.128546 (-0.098769) | 0.010426 / 0.075646 (-0.065221) | 0.057267 / 0.419271 (-0.362004) | 0.051276 / 0.043533 (0.007743) | 0.278012 / 0.255139 (0.022873) | 0.297099 / 0.283200 (0.013899) | 0.018340 / 0.141683 (-0.123343) | 1.179255 / 1.452155 (-0.272899) | 1.231536 / 1.492716 (-0.261180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092546 / 0.018006 (0.074540) | 0.299959 / 0.000490 (0.299469) | 0.000220 / 0.000200 (0.000020) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015755) | 0.075440 / 0.014526 (0.060914) | 0.086246 / 0.176557 (-0.090310) | 0.126511 / 0.737135 (-0.610624) | 0.091303 / 0.296338 (-0.205036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294775 / 0.215209 (0.079566) | 2.868973 / 2.077655 (0.791319) | 1.666971 / 1.504120 (0.162851) | 1.545680 / 1.541195 (0.004486) | 1.559983 / 1.468490 (0.091493) | 0.572191 / 4.584777 (-4.012586) | 2.429317 / 3.745712 (-1.316395) | 2.673334 / 5.269862 (-2.596527) | 1.758114 / 4.565676 (-2.807563) | 0.063766 / 0.424275 (-0.360509) | 0.005070 / 0.007607 (-0.002537) | 0.345488 / 0.226044 (0.119443) | 3.464525 / 2.268929 (1.195596) | 1.975717 / 55.444624 (-53.468908) | 1.686671 / 6.876477 (-5.189806) | 1.825434 / 2.142072 (-0.316638) | 0.655853 / 4.805227 (-4.149374) | 0.116372 / 6.500664 (-6.384292) | 0.040647 / 0.075469 (-0.034822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014080 / 1.841788 (-0.827707) | 12.038496 / 8.074308 (3.964188) | 10.354536 / 10.191392 (0.163144) | 0.130285 / 0.680424 (-0.550139) | 0.015514 / 0.534201 (-0.518687) | 0.284743 / 0.579283 (-0.294540) | 0.280275 / 0.434364 (-0.154088) | 0.321175 / 0.540337 (-0.219162) | 0.425840 / 1.386936 (-0.961096) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb6c6d46a171c4fa1b65167cb81998e2f863892 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6659/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6659 | https://github.com/huggingface/datasets/pull/6659 | true |
2,129,158,371 | https://api.github.com/repos/huggingface/datasets/issues/6658/labels{/name} | A simple implementation of a mechanism to resume an IterableDataset.
This is WIP and untested.
Example:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)
print(f"{ds.state_dict()=}")
for i, example in enumerate(ds):
print(example)
if i == 6:
state_dict = ds.state_dict()
ds.load_state_dict(state_dict)
print(f"{ds.state_dict()=}")
for example in ds:
print(example)
```
returns
```
ds.state_dict()={'ex_iterable_idx': 0, 'ex_iterables': [{'shard_idx': 0, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 0}]}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
ds.state_dict()={'ex_iterable_idx': 1, 'ex_iterables': [{'shard_idx': 3, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 2}]}
{'a': 2}
{'a': 3}
{'a': 4}
``` | 2024-02-12T12:24:32Z | 6,658 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-02-11T20:35:52Z | https://api.github.com/repos/huggingface/datasets/issues/6658/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6658/timeline | [Resumable IterableDataset] Add IterableDataset state_dict | https://api.github.com/repos/huggingface/datasets/issues/6658/events | null | {
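A possible follow-up usage sketch for the example above (the file name and the use of JSON for serialization are arbitrary choices, not part of this PR):

```python
import json

# Persist the current position at the end of a run ...
with open("iterable_state.json", "w") as f:
    json.dump(ds.state_dict(), f)

# ... and restore it in a later run before iterating again.
with open("iterable_state.json") as f:
    ds.load_state_dict(json.load(f))
```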
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6658",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6658"
} | PR_kwDODunzps5mlZyb | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6658). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6658/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6658 | https://github.com/huggingface/datasets/pull/6658 | true |
2,129,147,085 | https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name} | ### Describe the bug
The GitHub Actions step to publish the release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the Anaconda token and rerun the failed action? @albertvillanova?
![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700)
### Steps to reproduce the bug
Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662)
### Expected behavior
The action runs successfully and the latest release is pushed to the Hugging Face conda channel
### Environment info
Not applicable. | 2024-03-06T15:06:22Z | 6,657 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-11T20:05:17Z | https://api.github.com/repos/huggingface/datasets/issues/6657/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6657/timeline | Release not pushed to conda channel | https://api.github.com/repos/huggingface/datasets/issues/6657/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4",
"events_url": "https://api.github.com/users/atulsaurav/events{/privacy}",
"followers_url": "https://api.github.com/users/atulsaurav/followers",
"following_url": "https://api.github.com/users/atulsaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/atulsaurav",
"id": 7138162,
"login": "atulsaurav",
"node_id": "MDQ6VXNlcjcxMzgxNjI=",
"organizations_url": "https://api.github.com/users/atulsaurav/orgs",
"received_events_url": "https://api.github.com/users/atulsaurav/received_events",
"repos_url": "https://api.github.com/users/atulsaurav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/atulsaurav"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | completed | NONE | 2024-03-06T15:06:22Z | null | I_kwDODunzps5-6DTN | [
"Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ",
"I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda project?\r\n\r\nOnce this done, I could recreate and update the Anaconda token, as mentioned above it seems the current one has expired.",
"I think @LysandreJik has access ?",
"FYI it failed for 2.18.0 too: https://github.com/huggingface/datasets/actions/runs/8117132330/job/22188677936",
"We updated the token and I re-ran the conda releases :)"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6657 | https://github.com/huggingface/datasets/issues/6657 | false |
2,127,338,377 | https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name} | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
### Steps to reproduce the bug
1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")`
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-train.json"], split="train")
```
A similarly formatted but smaller file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz`, is loaded without issues
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```
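A possible (unverified) workaround sketch is to pre-split the big file into smaller JSON shards before loading, so that each Arrow table stays well below the 32-bit offset limit (the shard size of 10,000 records is arbitrary):

```python
import json
from datasets import load_dataset

with open("nq-train.json") as f:
    records = json.load(f)  # the file is a single large JSON array

shard_size = 10_000
shards = []
for i in range(0, len(records), shard_size):
    path = f"nq-train-{i // shard_size}.json"
    with open(path, "w") as out:
        json.dump(records[i : i + shard_size], out)
    shards.append(path)

data = load_dataset("json", data_files=shards, split="train")
```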
### Expected behavior
It should load normally
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | 2024-03-15T22:18:21Z | 6,656 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-09T15:14:21Z | https://api.github.com/repos/huggingface/datasets/issues/6656/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6656/timeline | Error when loading a big local json file | https://api.github.com/repos/huggingface/datasets/issues/6656/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4",
"events_url": "https://api.github.com/users/Riccorl/events{/privacy}",
"followers_url": "https://api.github.com/users/Riccorl/followers",
"following_url": "https://api.github.com/users/Riccorl/following{/other_user}",
"gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Riccorl",
"id": 10062216,
"login": "Riccorl",
"node_id": "MDQ6VXNlcjEwMDYyMjE2",
"organizations_url": "https://api.github.com/users/Riccorl/orgs",
"received_events_url": "https://api.github.com/users/Riccorl/received_events",
"repos_url": "https://api.github.com/users/Riccorl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Riccorl"
} | [] | null | null | NONE | null | null | I_kwDODunzps5-zJuJ | [
"I get similar when dealing with a large jsonl file (6k lines), \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k lines, files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6656 | https://github.com/huggingface/datasets/issues/6656 | false |
2,127,020,042 | https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name} | ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 go_emotions = load_dataset("go_emotions")
      2 data = go_emotions.data
File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2518 verification_mode = VerificationMode(
   2519     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2520 )
   2522 # Create a dataset builder
-> 2523 builder_instance = load_dataset_builder(
   2524     path=path,
   2525     name=name,
   2526     data_dir=data_dir,
   2527     data_files=data_files,
   2528     cache_dir=cache_dir,
   2529     features=features,
   2530     download_config=download_config,
   2531     download_mode=download_mode,
   2532     revision=revision,
   2533     token=token,
   2534     storage_options=storage_options,
   2535     trust_remote_code=trust_remote_code,
   2536     _require_default_config_name=name is None,
...
---> 63 if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
     64     pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
     66 # Unwrap `torch.compile`-ed functions
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
### Expected behavior
It should simply load the dataset into the variable without errors
### Environment info
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 2024-02-12T09:35:55Z | 6,655 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-09T12:15:39Z | https://api.github.com/repos/huggingface/datasets/issues/6655/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6655/timeline | Cannot load the dataset go_emotions | https://api.github.com/repos/huggingface/datasets/issues/6655/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4",
"events_url": "https://api.github.com/users/arame/events{/privacy}",
"followers_url": "https://api.github.com/users/arame/followers",
"following_url": "https://api.github.com/users/arame/following{/other_user}",
"gists_url": "https://api.github.com/users/arame/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arame",
"id": 688324,
"login": "arame",
"node_id": "MDQ6VXNlcjY4ODMyNA==",
"organizations_url": "https://api.github.com/users/arame/orgs",
"received_events_url": "https://api.github.com/users/arame/received_events",
"repos_url": "https://api.github.com/users/arame/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arame/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arame"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | null | NONE | null | null | I_kwDODunzps5-x8AK | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n",
"The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.",
"> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.",
"I tried running the code today and the problem appears to be fixed."
] | {
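A minimal sketch of the suggested version check (assuming `datasets`' existing `pklregister` and `_save_transformersPreTrainedTokenizerBase` helpers; the exact version bound is an assumption):

```python
# sketch for datasets/utils/_dill.py
import sys

from packaging import version

if "transformers" in sys.modules:
    import transformers

    # PreTrainedTokenizerBase only exists in transformers >= 3.0.1, so guard the
    # lazy reducer registration instead of assuming the attribute is present.
    if version.parse(transformers.__version__) >= version.parse("3.0.1"):
        pklregister(transformers.PreTrainedTokenizerBase)(_save_transformersPreTrainedTokenizerBase)
```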
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6655 | https://github.com/huggingface/datasets/issues/6655 | false |
2,126,939,358 | https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name} | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.
### Steps to reproduce the bug
Create virtual environment and activate
```
virtualenv venv
source venv/bin/activate
```
Then install the datasets package (I'm using the latest version)
```
pip install datasets==2.16.1
```
Then run
```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value
data = {
"num": [[1, 2], [3, 4]],
}
features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```
### Expected behavior
The map call should succeed, but instead I get the following stack trace
```
Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
dataset.map(lambda x: x, batched=True, batch_size=1)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```
After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`
```python
# datasets/table.py
...
2093 if feature.length * len(array) == len(array_values):
2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```
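A small pyarrow-only sketch of what seems to make that check fail (assuming `.values` on a sliced `FixedSizeListArray` still returns the full, unsliced child array):

```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4]], type=pa.list_(pa.int32(), 2))  # fixed_size_list<item: int32>[2]
batch = arr.slice(0, 1)  # roughly what a batched map with batch_size=1 hands to the writer

print(len(batch))         # 1 row
print(len(batch.values))  # 4 values, so feature.length * len(array) != len(array_values)
```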
### Environment info
Platform: MacOS
Datasets version: datasets==2.16.1
Python version: 3.9.6 | 2024-02-12T08:26:53Z | 6,654 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-09T11:23:19Z | https://api.github.com/repos/huggingface/datasets/issues/6654/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6654/timeline | Batched dataset map throws exception that cannot cast fixed length array to Sequence | https://api.github.com/repos/huggingface/datasets/issues/6654/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keesjandevries",
"id": 1029671,
"login": "keesjandevries",
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keesjandevries"
} | [] | null | completed | NONE | 2024-02-12T08:26:53Z | null | I_kwDODunzps5-xoTe | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6654 | https://github.com/huggingface/datasets/issues/6654 | false |
2,126,831,929 | https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name} | null | 2024-02-09T10:18:20Z | 6,653 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-09T10:12:02Z | https://api.github.com/repos/huggingface/datasets/issues/6653/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6653/timeline | Set dev version | https://api.github.com/repos/huggingface/datasets/issues/6653/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-09T10:12:12Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6653",
"merged_at": "2024-02-09T10:12:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6653"
} | PR_kwDODunzps5mdv5S | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003424 / 0.011008 (-0.007584) | 0.064195 / 0.038508 (0.025687) | 0.031742 / 0.023109 (0.008633) | 0.244774 / 0.275898 (-0.031124) | 0.268529 / 0.323480 (-0.054951) | 0.003970 / 0.007986 (-0.004016) | 0.002657 / 0.004328 (-0.001672) | 0.048847 / 0.004250 (0.044597) | 0.042196 / 0.037052 (0.005144) | 0.266044 / 0.258489 (0.007555) | 0.282400 / 0.293841 (-0.011441) | 0.027617 / 0.128546 (-0.100929) | 0.010400 / 0.075646 (-0.065246) | 0.205910 / 0.419271 (-0.213362) | 0.035820 / 0.043533 (-0.007713) | 0.247750 / 0.255139 (-0.007389) | 0.267318 / 0.283200 (-0.015882) | 0.017980 / 0.141683 (-0.123703) | 1.107263 / 1.452155 (-0.344892) | 1.173208 / 1.492716 (-0.319509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095830 / 0.018006 (0.077824) | 0.293891 / 0.000490 (0.293401) | 0.000257 / 0.000200 (0.000057) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018138 / 0.037411 (-0.019273) | 0.061631 / 0.014526 (0.047105) | 0.073038 / 0.176557 (-0.103519) | 0.118317 / 0.737135 (-0.618818) | 0.074190 / 0.296338 (-0.222148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287026 / 0.215209 (0.071817) | 2.786137 / 2.077655 (0.708482) | 1.472575 / 1.504120 (-0.031544) | 1.346919 / 1.541195 (-0.194276) | 1.388535 / 
1.468490 (-0.079955) | 0.565731 / 4.584777 (-4.019046) | 2.382573 / 3.745712 (-1.363139) | 2.736926 / 5.269862 (-2.532935) | 1.716517 / 4.565676 (-2.849159) | 0.062168 / 0.424275 (-0.362108) | 0.004924 / 0.007607 (-0.002683) | 0.341897 / 0.226044 (0.115853) | 3.355715 / 2.268929 (1.086787) | 1.837014 / 55.444624 (-53.607611) | 1.532063 / 6.876477 (-5.344414) | 1.548193 / 2.142072 (-0.593880) | 0.634995 / 4.805227 (-4.170232) | 0.115622 / 6.500664 (-6.385042) | 0.042252 / 0.075469 (-0.033217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970713 / 1.841788 (-0.871075) | 11.727576 / 8.074308 (3.653268) | 9.806524 / 10.191392 (-0.384868) | 0.127622 / 0.680424 (-0.552802) | 0.014140 / 0.534201 (-0.520061) | 0.286832 / 0.579283 (-0.292451) | 0.266556 / 0.434364 (-0.167808) | 0.325940 / 0.540337 (-0.214398) | 0.421839 / 1.386936 (-0.965097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005495 / 0.011353 (-0.005858) | 0.003676 / 0.011008 (-0.007332) | 0.054361 / 0.038508 (0.015853) | 0.030743 / 0.023109 (0.007633) | 0.277200 / 0.275898 (0.001302) | 0.313459 / 0.323480 (-0.010021) | 0.004316 / 0.007986 (-0.003670) | 0.002750 / 0.004328 (-0.001578) | 0.049491 / 0.004250 (0.045241) | 0.044268 / 0.037052 (0.007215) | 0.292529 / 0.258489 (0.034039) | 0.326524 / 0.293841 (0.032683) | 0.048040 / 0.128546 (-0.080507) | 0.010390 / 0.075646 (-0.065256) | 0.058459 / 0.419271 (-0.360813) | 0.033765 / 0.043533 (-0.009768) | 0.276003 / 0.255139 (0.020864) | 0.297299 / 0.283200 (0.014099) | 0.018532 / 0.141683 (-0.123151) | 1.157639 / 1.452155 (-0.294515) | 1.220492 / 1.492716 (-0.272225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093903 / 0.018006 (0.075897) | 0.303005 / 0.000490 (0.302515) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021580 / 0.037411 (-0.015831) | 0.076176 / 0.014526 (0.061650) | 0.086998 / 0.176557 (-0.089558) | 0.124148 / 0.737135 (-0.612987) | 0.088613 / 0.296338 (-0.207725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300623 / 0.215209 (0.085414) | 2.911876 / 2.077655 (0.834221) | 1.588398 / 1.504120 (0.084278) | 1.471251 / 1.541195 (-0.069944) | 1.505528 / 1.468490 (0.037038) | 0.570635 / 4.584777 (-4.014142) | 2.485769 / 3.745712 (-1.259943) | 2.785355 / 5.269862 (-2.484507) | 1.752944 / 4.565676 (-2.812732) | 0.063146 / 0.424275 (-0.361129) | 0.004980 / 0.007607 (-0.002627) | 0.354577 / 0.226044 (0.128532) | 3.477181 / 2.268929 (1.208253) | 1.951906 / 55.444624 (-53.492718) | 1.677169 / 6.876477 (-5.199307) | 1.686338 / 2.142072 (-0.455735) | 0.637156 / 4.805227 (-4.168071) | 0.117732 / 6.500664 (-6.382932) | 0.041091 / 0.075469 (-0.034378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010071 / 1.841788 (-0.831717) | 12.172242 / 8.074308 (4.097934) | 10.422811 / 10.191392 (0.231419) | 0.137185 / 0.680424 (-0.543239) | 0.014643 / 0.534201 (-0.519558) | 0.287248 / 0.579283 (-0.292035) | 0.272779 / 0.434364 (-0.161585) | 0.331761 / 0.540337 (-0.208576) | 0.417266 / 1.386936 (-0.969670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9751fb14594d354e952f0ebdfaf31cb203b011e7 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6653 | https://github.com/huggingface/datasets/pull/6653 | true |
2,126,760,798 | https://api.github.com/repos/huggingface/datasets/issues/6652/labels{/name} | null | 2024-02-09T10:11:48Z | 6,652 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-09T09:25:01Z | https://api.github.com/repos/huggingface/datasets/issues/6652/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6652/timeline | Release: 2.17.0 | https://api.github.com/repos/huggingface/datasets/issues/6652/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2024-02-09T10:05:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6652",
"merged_at": "2024-02-09T10:05:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6652"
} | PR_kwDODunzps5mdgcv | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005207 / 0.011353 (-0.006145) | 0.003785 / 0.011008 (-0.007223) | 0.064221 / 0.038508 (0.025713) | 0.028981 / 0.023109 (0.005872) | 0.246215 / 0.275898 (-0.029683) | 0.268058 / 0.323480 (-0.055422) | 0.004028 / 0.007986 (-0.003958) | 0.002804 / 0.004328 (-0.001525) | 0.048878 / 0.004250 (0.044627) | 0.042641 / 0.037052 (0.005589) | 0.255590 / 0.258489 (-0.002899) | 0.287377 / 0.293841 (-0.006464) | 0.027772 / 0.128546 (-0.100774) | 0.010637 / 0.075646 (-0.065009) | 0.211526 / 0.419271 (-0.207746) | 0.035789 / 0.043533 (-0.007744) | 0.243042 / 0.255139 (-0.012097) | 0.268369 / 0.283200 (-0.014830) | 0.017907 / 0.141683 (-0.123776) | 1.138829 / 1.452155 (-0.313326) | 1.175732 / 1.492716 (-0.316984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094205 / 0.018006 (0.076199) | 0.304317 / 0.000490 (0.303827) | 0.000206 / 0.000200 (0.000006) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018424 / 0.037411 (-0.018987) | 0.061719 / 0.014526 (0.047193) | 0.073471 / 0.176557 (-0.103085) | 0.121577 / 0.737135 (-0.615558) | 0.075134 / 0.296338 (-0.221204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275178 / 0.215209 (0.059969) | 2.689222 / 2.077655 (0.611568) | 1.396680 / 1.504120 (-0.107439) | 1.278782 / 1.541195 (-0.262413) | 1.326632 / 
1.468490 (-0.141858) | 0.566915 / 4.584777 (-4.017862) | 2.365928 / 3.745712 (-1.379784) | 2.785435 / 5.269862 (-2.484427) | 1.745131 / 4.565676 (-2.820546) | 0.062798 / 0.424275 (-0.361477) | 0.005107 / 0.007607 (-0.002500) | 0.330441 / 0.226044 (0.104396) | 3.266265 / 2.268929 (0.997337) | 1.792588 / 55.444624 (-53.652036) | 1.516021 / 6.876477 (-5.360455) | 1.562750 / 2.142072 (-0.579323) | 0.652964 / 4.805227 (-4.152264) | 0.117813 / 6.500664 (-6.382852) | 0.042372 / 0.075469 (-0.033097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010107 / 1.841788 (-0.831680) | 11.819910 / 8.074308 (3.745602) | 9.701673 / 10.191392 (-0.489719) | 0.178165 / 0.680424 (-0.502259) | 0.014438 / 0.534201 (-0.519763) | 0.297733 / 0.579283 (-0.281550) | 0.264914 / 0.434364 (-0.169450) | 0.324531 / 0.540337 (-0.215806) | 0.430207 / 1.386936 (-0.956729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003870 / 0.011008 (-0.007138) | 0.050379 / 0.038508 (0.011871) | 0.031238 / 0.023109 (0.008129) | 0.276839 / 0.275898 (0.000941) | 0.299488 / 0.323480 (-0.023992) | 0.005143 / 0.007986 (-0.002842) | 0.002725 / 0.004328 (-0.001604) | 0.048184 / 0.004250 (0.043934) | 0.046232 / 0.037052 (0.009180) | 0.287058 / 0.258489 (0.028569) | 0.322659 / 0.293841 (0.028818) | 0.047598 / 0.128546 (-0.080949) | 0.011116 / 0.075646 (-0.064530) | 0.058252 / 0.419271 (-0.361019) | 0.033404 / 0.043533 (-0.010128) | 0.277650 / 0.255139 (0.022511) | 0.295610 / 0.283200 (0.012410) | 0.018124 / 0.141683 (-0.123559) | 1.135052 / 1.452155 (-0.317103) | 1.194261 / 1.492716 (-0.298456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095595 / 0.018006 (0.077588) | 0.306408 / 0.000490 (0.305918) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022027 / 0.037411 (-0.015385) | 0.076224 / 0.014526 (0.061698) | 0.087441 / 0.176557 (-0.089116) | 0.126636 / 0.737135 (-0.610499) | 0.089442 / 0.296338 (-0.206896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291315 / 0.215209 (0.076106) | 2.835304 / 2.077655 (0.757650) | 1.581102 / 1.504120 (0.076982) | 1.463046 / 1.541195 (-0.078149) | 1.481982 / 1.468490 (0.013492) | 0.559989 / 4.584777 (-4.024788) | 2.385262 / 3.745712 (-1.360450) | 2.773478 / 5.269862 (-2.496383) | 1.744427 / 4.565676 (-2.821249) | 0.062687 / 0.424275 (-0.361589) | 0.005149 / 0.007607 (-0.002458) | 0.374600 / 0.226044 (0.148555) | 3.376507 / 2.268929 (1.107579) | 1.935290 / 55.444624 (-53.509334) | 1.663227 / 6.876477 (-5.213250) | 1.678987 / 2.142072 (-0.463085) | 0.638970 / 4.805227 (-4.166258) | 0.120000 / 6.500664 (-6.380664) | 0.040862 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008795 / 1.841788 (-0.832993) | 12.275084 / 8.074308 (4.200776) | 10.340088 / 10.191392 (0.148696) | 0.136454 / 0.680424 (-0.543970) | 0.014404 / 0.534201 (-0.519797) | 0.289478 / 0.579283 (-0.289805) | 0.279243 / 0.434364 (-0.155121) | 0.330992 / 0.540337 (-0.209346) | 0.422043 / 1.386936 (-0.964893) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70633576ecf1f3f5e5cdfd8c9189246b3604f4b6 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6652/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6652 | https://github.com/huggingface/datasets/pull/6652 | true |
2,126,649,626 | https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name} | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how they are already supported for `datasets.load_dataset`. See https://www.nature.com/articles/s41551-023-01093-3.
### Motivation
Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of `load_from_disk` and `load_dataset`. A sketch of the requested usage is shown below.
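For illustration, here is a minimal sketch of what the requested API could look like. Only the `load_dataset` slice syntax exists today; the `split` argument to `load_from_disk` is an assumption that merely illustrates the feature request, and the local path is a placeholder.

```python
from datasets import load_dataset, load_from_disk

# Existing behavior: slice splits are supported when loading from the Hub.
subset = load_dataset("rotten_tomatoes", split="train[:10%]")

# Requested behavior (hypothetical): the same slicing for datasets saved
# locally with `save_to_disk`, e.g.
# subset = load_from_disk("path/to/saved_dataset", split="train[:10%]")
```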
### Your contribution
Sure, if the devs think the feature request is sensible. | 2024-02-09T08:00:21Z | 6,651 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-09T08:00:21Z | https://api.github.com/repos/huggingface/datasets/issues/6651/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6651/timeline | Slice splits support for datasets.load_from_disk | https://api.github.com/repos/huggingface/datasets/issues/6651/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4",
"events_url": "https://api.github.com/users/mhorlacher/events{/privacy}",
"followers_url": "https://api.github.com/users/mhorlacher/followers",
"following_url": "https://api.github.com/users/mhorlacher/following{/other_user}",
"gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mhorlacher",
"id": 37439882,
"login": "mhorlacher",
"node_id": "MDQ6VXNlcjM3NDM5ODgy",
"organizations_url": "https://api.github.com/users/mhorlacher/orgs",
"received_events_url": "https://api.github.com/users/mhorlacher/received_events",
"repos_url": "https://api.github.com/users/mhorlacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mhorlacher"
} | [] | null | null | NONE | null | null | I_kwDODunzps5-whka | [] | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6651 | https://github.com/huggingface/datasets/issues/6651 | false |
2,125,680,991 | https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name} | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
{
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
k: dataset.map(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
arrow_formatted_shard = shard.with_format("arrow")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
dataset = copy.deepcopy(self)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```
### Steps to reproduce the bug
I'm running an MLOps flow using AzureML.
The error appears when I run the following function in my training script:
```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
seq_length),
batched=True,
batch_size=batch_size,
remove_columns=['col1', 'col2'])
```
```python
def tokenize_function(tok, seq_length, example):
# Pad so that each batch has the same sequence length
inp = tok(example['col1'], padding=True, truncation=True)
outp = tok(example['col2'], padding="max_length", max_length=seq_length)
res = {
'input_ids': inp['input_ids'],
'attention_mask': inp['attention_mask'],
'decoder_input_ids': outp['input_ids'],
'labels': outp['input_ids'],
'decoder_attention_mask': outp['attention_mask']
}
return res
```
### Expected behavior
Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then but it doesn't appear that datasets versions have changed since Dec. '23.
### Environment info
datasets 2.16.1
transformers 4.35.2
pyarrow 15.0.0
pyarrow-hotfix 0.6
torch 2.0.1
I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time. | 2024-02-21T00:34:41Z | 6,650 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-08T17:11:26Z | https://api.github.com/repos/huggingface/datasets/issues/6650/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6650/timeline | AttributeError: 'InMemoryTable' object has no attribute '_batches' | https://api.github.com/repos/huggingface/datasets/issues/6650/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4",
"events_url": "https://api.github.com/users/matsuobasho/events{/privacy}",
"followers_url": "https://api.github.com/users/matsuobasho/followers",
"following_url": "https://api.github.com/users/matsuobasho/following{/other_user}",
"gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/matsuobasho",
"id": 13874772,
"login": "matsuobasho",
"node_id": "MDQ6VXNlcjEzODc0Nzcy",
"organizations_url": "https://api.github.com/users/matsuobasho/orgs",
"received_events_url": "https://api.github.com/users/matsuobasho/received_events",
"repos_url": "https://api.github.com/users/matsuobasho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/matsuobasho"
} | [] | null | null | NONE | null | null | I_kwDODunzps5-s1Ff | [
"Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```",
"No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. ",
"Feel free to close the issue then :)."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6650 | https://github.com/huggingface/datasets/issues/6650 | false |
2,124,940,213 | https://api.github.com/repos/huggingface/datasets/issues/6649/labels{/name} | just added torch.no_grad and eval() | 2024-02-08T11:23:35Z | 6,649 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-08T11:17:24Z | https://api.github.com/repos/huggingface/datasets/issues/6649/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6649/timeline | Minor multi gpu doc improvement | https://api.github.com/repos/huggingface/datasets/issues/6649/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-02-08T11:17:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6649.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6649",
"merged_at": "2024-02-08T11:17:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6649.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6649"
} | PR_kwDODunzps5mXRo8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005197 / 0.011353 (-0.006156) | 0.003469 / 0.011008 (-0.007539) | 0.062306 / 0.038508 (0.023798) | 0.028417 / 0.023109 (0.005308) | 0.241147 / 0.275898 (-0.034751) | 0.270910 / 0.323480 (-0.052569) | 0.003053 / 0.007986 (-0.004933) | 0.003343 / 0.004328 (-0.000985) | 0.048044 / 0.004250 (0.043794) | 0.043738 / 0.037052 (0.006686) | 0.259274 / 0.258489 (0.000785) | 0.282522 / 0.293841 (-0.011319) | 0.027807 / 0.128546 (-0.100739) | 0.010413 / 0.075646 (-0.065234) | 0.206322 / 0.419271 (-0.212950) | 0.035770 / 0.043533 (-0.007763) | 0.243465 / 0.255139 (-0.011674) | 0.261596 / 0.283200 (-0.021604) | 0.018613 / 0.141683 (-0.123070) | 1.115509 / 1.452155 (-0.336645) | 1.189403 / 1.492716 (-0.303314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.086075 / 0.018006 (0.068069) | 0.296140 / 0.000490 (0.295650) | 0.000198 / 0.000200 (-0.000002) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018238 / 0.037411 (-0.019173) | 0.061783 / 0.014526 (0.047257) | 0.072014 / 0.176557 (-0.104543) | 0.118746 / 0.737135 (-0.618389) | 0.073279 / 0.296338 (-0.223060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278281 / 0.215209 (0.063072) | 2.772209 / 2.077655 (0.694555) | 1.404503 / 1.504120 (-0.099617) | 1.274753 / 1.541195 (-0.266441) | 1.304394 / 
1.468490 (-0.164096) | 0.556903 / 4.584777 (-4.027874) | 2.335428 / 3.745712 (-1.410284) | 2.712255 / 5.269862 (-2.557606) | 1.722252 / 4.565676 (-2.843425) | 0.061268 / 0.424275 (-0.363007) | 0.005029 / 0.007607 (-0.002578) | 0.326112 / 0.226044 (0.100067) | 3.207917 / 2.268929 (0.938988) | 1.743513 / 55.444624 (-53.701111) | 1.476418 / 6.876477 (-5.400059) | 1.489776 / 2.142072 (-0.652297) | 0.628181 / 4.805227 (-4.177046) | 0.115959 / 6.500664 (-6.384706) | 0.041854 / 0.075469 (-0.033615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969039 / 1.841788 (-0.872749) | 11.178646 / 8.074308 (3.104338) | 9.639716 / 10.191392 (-0.551676) | 0.139750 / 0.680424 (-0.540674) | 0.014230 / 0.534201 (-0.519971) | 0.285318 / 0.579283 (-0.293965) | 0.260788 / 0.434364 (-0.173576) | 0.324183 / 0.540337 (-0.216154) | 0.416326 / 1.386936 (-0.970610) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005149 / 0.011353 (-0.006204) | 0.003469 / 0.011008 (-0.007539) | 0.049761 / 0.038508 (0.011253) | 0.030723 / 0.023109 (0.007614) | 0.271562 / 0.275898 (-0.004336) | 0.297843 / 0.323480 (-0.025637) | 0.004296 / 0.007986 (-0.003690) | 0.002704 / 0.004328 (-0.001624) | 0.048890 / 0.004250 (0.044640) | 0.044776 / 0.037052 (0.007723) | 0.285490 / 0.258489 (0.027001) | 0.312888 / 0.293841 (0.019047) | 0.046239 / 0.128546 (-0.082307) | 0.010238 / 0.075646 (-0.065408) | 0.057968 / 0.419271 (-0.361304) | 0.033295 / 0.043533 (-0.010238) | 0.274320 / 0.255139 (0.019181) | 0.296199 / 0.283200 (0.012999) | 0.017856 / 0.141683 (-0.123827) | 1.147532 / 1.452155 (-0.304622) | 1.211647 / 1.492716 (-0.281070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089655 / 0.018006 (0.071649) | 0.297275 / 0.000490 (0.296785) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.075041 / 0.014526 (0.060515) | 0.085754 / 0.176557 (-0.090802) | 0.124512 / 0.737135 (-0.612623) | 0.086926 / 0.296338 (-0.209412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290306 / 0.215209 (0.075097) | 2.847404 / 2.077655 (0.769749) | 1.606175 / 1.504120 (0.102055) | 1.483220 / 1.541195 (-0.057974) | 1.514551 / 1.468490 (0.046061) | 0.559332 / 4.584777 (-4.025445) | 2.403089 / 3.745712 (-1.342624) | 2.715179 / 5.269862 (-2.554683) | 1.688340 / 4.565676 (-2.877337) | 0.062057 / 0.424275 (-0.362218) | 0.004955 / 0.007607 (-0.002652) | 0.338909 / 0.226044 (0.112865) | 3.356882 / 2.268929 (1.087954) | 1.942259 / 55.444624 (-53.502366) | 1.675195 / 6.876477 (-5.201282) | 1.688158 / 2.142072 (-0.453914) | 0.637270 / 4.805227 (-4.167957) | 0.114314 / 6.500664 (-6.386350) | 0.040677 / 0.075469 (-0.034792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022126 / 1.841788 (-0.819661) | 11.783359 / 8.074308 (3.709051) | 10.247652 / 10.191392 (0.056260) | 0.138188 / 0.680424 (-0.542236) | 0.014850 / 0.534201 (-0.519351) | 0.287414 / 0.579283 (-0.291869) | 0.274393 / 0.434364 (-0.159971) | 0.327255 / 0.540337 (-0.213082) | 0.416355 / 1.386936 (-0.970581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#727a952367966a98b759d54f333b1e2c28cfd4d4 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6649/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6649 | https://github.com/huggingface/datasets/pull/6649 | true |
2,124,813,589 | https://api.github.com/repos/huggingface/datasets/issues/6648/labels{/name} | (basically the same content as the hfh upload docs, but adapted for datasets) | 2024-02-08T13:57:41Z | 6,648 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-08T10:24:56Z | https://api.github.com/repos/huggingface/datasets/issues/6648/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6648/timeline | Document usage of hfh cli instead of git | https://api.github.com/repos/huggingface/datasets/issues/6648/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-02-08T13:51:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6648",
"merged_at": "2024-02-08T13:51:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6648"
} | PR_kwDODunzps5mW1MA | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004951 / 0.011353 (-0.006402) | 0.003187 / 0.011008 (-0.007821) | 0.062959 / 0.038508 (0.024451) | 0.028037 / 0.023109 (0.004928) | 0.241374 / 0.275898 (-0.034524) | 0.262792 / 0.323480 (-0.060688) | 0.004132 / 0.007986 (-0.003854) | 0.002766 / 0.004328 (-0.001563) | 0.051416 / 0.004250 (0.047165) | 0.040957 / 0.037052 (0.003904) | 0.260760 / 0.258489 (0.002271) | 0.282018 / 0.293841 (-0.011823) | 0.027689 / 0.128546 (-0.100857) | 0.010433 / 0.075646 (-0.065214) | 0.211598 / 0.419271 (-0.207674) | 0.035447 / 0.043533 (-0.008086) | 0.244333 / 0.255139 (-0.010806) | 0.263192 / 0.283200 (-0.020008) | 0.016816 / 0.141683 (-0.124867) | 1.103188 / 1.452155 (-0.348967) | 1.179093 / 1.492716 (-0.313623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092412 / 0.018006 (0.074406) | 0.301226 / 0.000490 (0.300736) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018146 / 0.037411 (-0.019265) | 0.061447 / 0.014526 (0.046921) | 0.072162 / 0.176557 (-0.104394) | 0.118965 / 0.737135 (-0.618170) | 0.073756 / 0.296338 (-0.222583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285361 / 0.215209 (0.070152) | 2.776928 / 2.077655 (0.699273) | 1.506859 / 1.504120 (0.002739) | 1.379119 / 1.541195 (-0.162075) | 1.401798 / 
1.468490 (-0.066692) | 0.572512 / 4.584777 (-4.012265) | 2.403793 / 3.745712 (-1.341919) | 2.740496 / 5.269862 (-2.529366) | 1.714611 / 4.565676 (-2.851065) | 0.063496 / 0.424275 (-0.360780) | 0.005009 / 0.007607 (-0.002598) | 0.342438 / 0.226044 (0.116393) | 3.368129 / 2.268929 (1.099200) | 1.831200 / 55.444624 (-53.613424) | 1.553611 / 6.876477 (-5.322866) | 1.578116 / 2.142072 (-0.563956) | 0.653034 / 4.805227 (-4.152193) | 0.117724 / 6.500664 (-6.382940) | 0.041188 / 0.075469 (-0.034282) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972520 / 1.841788 (-0.869268) | 11.186297 / 8.074308 (3.111989) | 9.485829 / 10.191392 (-0.705563) | 0.139715 / 0.680424 (-0.540708) | 0.013705 / 0.534201 (-0.520496) | 0.287384 / 0.579283 (-0.291899) | 0.266784 / 0.434364 (-0.167580) | 0.320789 / 0.540337 (-0.219548) | 0.417484 / 1.386936 (-0.969452) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005570 / 0.011353 (-0.005783) | 0.003416 / 0.011008 (-0.007592) | 0.051160 / 0.038508 (0.012652) | 0.031082 / 0.023109 (0.007973) | 0.279336 / 0.275898 (0.003438) | 0.300529 / 0.323480 (-0.022951) | 0.004320 / 0.007986 (-0.003666) | 0.002781 / 0.004328 (-0.001548) | 0.049642 / 0.004250 (0.045391) | 0.044379 / 0.037052 (0.007327) | 0.293797 / 0.258489 (0.035308) | 0.317844 / 0.293841 (0.024003) | 0.049697 / 0.128546 (-0.078849) | 0.010624 / 0.075646 (-0.065023) | 0.058834 / 0.419271 (-0.360437) | 0.033869 / 0.043533 (-0.009664) | 0.280547 / 0.255139 (0.025408) | 0.300685 / 0.283200 (0.017486) | 0.017010 / 0.141683 (-0.124673) | 1.172277 / 1.452155 (-0.279878) | 1.205359 / 1.492716 (-0.287358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092914 / 0.018006 (0.074907) | 0.303561 / 0.000490 (0.303071) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.075460 / 0.014526 (0.060934) | 0.085795 / 0.176557 (-0.090762) | 0.124776 / 0.737135 (-0.612360) | 0.088260 / 0.296338 (-0.208079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302873 / 0.215209 (0.087664) | 2.936173 / 2.077655 (0.858519) | 1.589251 / 1.504120 (0.085131) | 1.477552 / 1.541195 (-0.063643) | 1.479322 / 1.468490 (0.010832) | 0.570481 / 4.584777 (-4.014296) | 2.434137 / 3.745712 (-1.311575) | 2.774012 / 5.269862 (-2.495849) | 1.718103 / 4.565676 (-2.847574) | 0.061951 / 0.424275 (-0.362324) | 0.004992 / 0.007607 (-0.002615) | 0.352250 / 0.226044 (0.126205) | 3.457417 / 2.268929 (1.188488) | 1.934587 / 55.444624 (-53.510037) | 1.646904 / 6.876477 (-5.229573) | 1.669429 / 2.142072 (-0.472643) | 0.649665 / 4.805227 (-4.155562) | 0.116630 / 6.500664 (-6.384034) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011488 / 1.841788 (-0.830300) | 11.866394 / 8.074308 (3.792086) | 10.144588 / 10.191392 (-0.046804) | 0.129931 / 0.680424 (-0.550493) | 0.014885 / 0.534201 (-0.519316) | 0.287463 / 0.579283 (-0.291821) | 0.280754 / 0.434364 (-0.153610) | 0.330139 / 0.540337 (-0.210199) | 0.414653 / 1.386936 (-0.972283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#585275b8deaebd1bdcbd3725fa63172395791c73 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6648/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6648 | https://github.com/huggingface/datasets/pull/6648 | true |
2,123,397,569 | https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name} | * A small update to the documentation, noting the ability to load jsonl files. | 2024-02-08T15:34:17Z | 6,647 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-07T16:18:08Z | https://api.github.com/repos/huggingface/datasets/issues/6647/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6647/timeline | Update loading.mdx to include "jsonl" file loading. | https://api.github.com/repos/huggingface/datasets/issues/6647/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mosheber",
"id": 22236370,
"login": "mosheber",
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"repos_url": "https://api.github.com/users/mosheber/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mosheber"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6647.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6647",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6647.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6647"
} | PR_kwDODunzps5mSB2B | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it just below, where we present the JSON-Lines example.\r\n> \r\n> * Maybe adding that this format is called JSON-Lines\r\n> * Add the example after the JSON-Lines data example\r\n> \r\n> https://github.com/huggingface/datasets/blob/14d9afbb7ae1b787c450261ca0ff374551993031/docs/source/loading.mdx#L135-L138\r\n\r\nThank you @albertvillanova for the feedback! I moved the jsonl file loading example to a more appropriate location. "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6647 | https://github.com/huggingface/datasets/pull/6647 | true |
2,123,134,128 | https://api.github.com/repos/huggingface/datasets/issues/6646/labels{/name} | Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU
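For context, below is a rough sketch of the multi-GPU inference pattern this example demonstrates: one worker process per GPU, with each process pinned to a device via the `rank` passed by `map(..., with_rank=True)`. The model name comes from this PR, but the dataset name, the `prompt` column, and the generation arguments are placeholders, not the exact documented code.

```python
import torch
from multiprocess import set_start_method
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset

model_id = "Qwen/Qwen1.5-0.5B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.eval()

def gpu_computation(batch, rank):
    # Pin each worker process to one GPU based on its rank.
    device = f"cuda:{(rank or 0) % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(batch["prompt"], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=32)
    return {"generation": tokenizer.batch_decode(outputs, skip_special_tokens=True)}

if __name__ == "__main__":
    # Placeholder dataset; assumed to have a "prompt" column.
    ds = load_dataset("placeholder/dataset", split="train")
    set_start_method("spawn")
    ds = ds.map(
        gpu_computation,
        batched=True,
        batch_size=16,
        with_rank=True,
        num_proc=torch.cuda.device_count(),  # one process per GPU
    )
```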
The previous example used a model for translation, and the way it was set up was not really the right way to use the model. | 2024-02-09T17:43:32Z | 6,646 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-07T14:15:01Z | https://api.github.com/repos/huggingface/datasets/issues/6646/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6646/timeline | Better multi-gpu example | https://api.github.com/repos/huggingface/datasets/issues/6646/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-02-07T14:59:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6646",
"merged_at": "2024-02-07T14:59:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6646"
} | PR_kwDODunzps5mRIma | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005598 / 0.011353 (-0.005755) | 0.003640 / 0.011008 (-0.007369) | 0.064557 / 0.038508 (0.026049) | 0.029645 / 0.023109 (0.006536) | 0.243695 / 0.275898 (-0.032203) | 0.261252 / 0.323480 (-0.062228) | 0.004067 / 0.007986 (-0.003919) | 0.002883 / 0.004328 (-0.001446) | 0.049192 / 0.004250 (0.044942) | 0.045299 / 0.037052 (0.008246) | 0.273207 / 0.258489 (0.014718) | 0.288668 / 0.293841 (-0.005173) | 0.028114 / 0.128546 (-0.100432) | 0.010597 / 0.075646 (-0.065049) | 0.215345 / 0.419271 (-0.203927) | 0.036119 / 0.043533 (-0.007414) | 0.243718 / 0.255139 (-0.011421) | 0.266657 / 0.283200 (-0.016543) | 0.018176 / 0.141683 (-0.123507) | 1.127926 / 1.452155 (-0.324229) | 1.168066 / 1.492716 (-0.324650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096001 / 0.018006 (0.077994) | 0.304317 / 0.000490 (0.303828) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018241 / 0.037411 (-0.019170) | 0.061505 / 0.014526 (0.046979) | 0.072456 / 0.176557 (-0.104101) | 0.118315 / 0.737135 (-0.618821) | 0.075154 / 0.296338 (-0.221184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278748 / 0.215209 (0.063538) | 2.729923 / 2.077655 (0.652268) | 1.416835 / 1.504120 (-0.087285) | 1.294016 / 1.541195 (-0.247179) | 1.323249 / 
1.468490 (-0.145241) | 0.575389 / 4.584777 (-4.009388) | 2.404923 / 3.745712 (-1.340789) | 2.769233 / 5.269862 (-2.500629) | 1.742340 / 4.565676 (-2.823336) | 0.062664 / 0.424275 (-0.361611) | 0.004951 / 0.007607 (-0.002656) | 0.335024 / 0.226044 (0.108979) | 3.291446 / 2.268929 (1.022518) | 1.797095 / 55.444624 (-53.647530) | 1.532963 / 6.876477 (-5.343513) | 1.529315 / 2.142072 (-0.612758) | 0.654922 / 4.805227 (-4.150305) | 0.118772 / 6.500664 (-6.381892) | 0.042034 / 0.075469 (-0.033435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983646 / 1.841788 (-0.858141) | 11.518625 / 8.074308 (3.444317) | 9.538781 / 10.191392 (-0.652611) | 0.140300 / 0.680424 (-0.540124) | 0.013966 / 0.534201 (-0.520235) | 0.287071 / 0.579283 (-0.292212) | 0.270201 / 0.434364 (-0.164163) | 0.323294 / 0.540337 (-0.217044) | 0.418130 / 1.386936 (-0.968806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005508 / 0.011353 (-0.005844) | 0.003714 / 0.011008 (-0.007294) | 0.050031 / 0.038508 (0.011523) | 0.031866 / 0.023109 (0.008756) | 0.272248 / 0.275898 (-0.003650) | 0.295105 / 0.323480 (-0.028375) | 0.005179 / 0.007986 (-0.002807) | 0.002820 / 0.004328 (-0.001508) | 0.048896 / 0.004250 (0.044646) | 0.045975 / 0.037052 (0.008922) | 0.287662 / 0.258489 (0.029173) | 0.321139 / 0.293841 (0.027298) | 0.049242 / 0.128546 (-0.079304) | 0.010732 / 0.075646 (-0.064914) | 0.057943 / 0.419271 (-0.361328) | 0.033527 / 0.043533 (-0.010006) | 0.271746 / 0.255139 (0.016607) | 0.291404 / 0.283200 (0.008204) | 0.019351 / 0.141683 (-0.122332) | 1.157221 / 1.452155 (-0.294934) | 1.215757 / 1.492716 (-0.276959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096950 / 0.018006 (0.078944) | 0.312002 / 0.000490 (0.311512) | 0.000223 / 0.000200 (0.000023) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022288 / 0.037411 (-0.015123) | 0.075282 / 0.014526 (0.060756) | 0.087445 / 0.176557 (-0.089112) | 0.125617 / 0.737135 (-0.611519) | 0.088878 / 0.296338 (-0.207460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291961 / 0.215209 (0.076752) | 2.881445 / 2.077655 (0.803790) | 1.586128 / 1.504120 (0.082008) | 1.458636 / 1.541195 (-0.082558) | 1.487001 / 1.468490 (0.018511) | 0.575466 / 4.584777 (-4.009311) | 2.454941 / 3.745712 (-1.290771) | 2.878077 / 5.269862 (-2.391785) | 1.787215 / 4.565676 (-2.778462) | 0.064010 / 0.424275 (-0.360265) | 0.005092 / 0.007607 (-0.002516) | 0.360500 / 0.226044 (0.134455) | 3.465574 / 2.268929 (1.196646) | 1.957516 / 55.444624 (-53.487108) | 1.666282 / 6.876477 (-5.210195) | 1.690070 / 2.142072 (-0.452002) | 0.661323 / 4.805227 (-4.143905) | 0.117824 / 6.500664 (-6.382840) | 0.042286 / 0.075469 (-0.033183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026517 / 1.841788 (-0.815270) | 12.083347 / 8.074308 (4.009039) | 10.269319 / 10.191392 (0.077927) | 0.139253 / 0.680424 (-0.541171) | 0.016258 / 0.534201 (-0.517943) | 0.290583 / 0.579283 (-0.288700) | 0.284338 / 0.434364 (-0.150026) | 0.335865 / 0.540337 (-0.204473) | 0.416600 / 1.386936 (-0.970336) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba3cfad91e9366cda0ba203700fc745d8bcd1f17 \"CML watermark\")\n",
"Thanks, I was needing this example today <3 "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6646/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6646 | https://github.com/huggingface/datasets/pull/6646 | true |
2,122,956,818 | https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name} | Support fsspec 2024.2.
First, we should address:
- #6644 | 2024-02-29T15:12:19Z | 6,645 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-07T12:45:29Z | https://api.github.com/repos/huggingface/datasets/issues/6645/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6645/timeline | Support fsspec 2024.2 | https://api.github.com/repos/huggingface/datasets/issues/6645/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | completed | MEMBER | 2024-02-29T15:12:19Z | null | I_kwDODunzps5-icAS | [
"I'd be very grateful. This upper bound banished me straight into dependency hell today. :("
] | {
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6645 | https://github.com/huggingface/datasets/issues/6645 | false |
2,122,955,282 | https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name} | Support fsspec 2023.12 by handling previous and new glob behavior. | 2024-02-29T15:12:18Z | 6,644 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-07T12:44:39Z | https://api.github.com/repos/huggingface/datasets/issues/6644/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6644/timeline | Support fsspec 2023.12 | https://api.github.com/repos/huggingface/datasets/issues/6644/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | completed | MEMBER | 2024-02-29T15:12:18Z | null | I_kwDODunzps5-iboS | [
"The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec related behavior in datasets that needs to be updated to get 2024.2 supported, we'd like to get this conflict resolved as quickly as possible and we're willing to contribute any additional work that's required here.\r\n\r\ncc @dberenbaum"
] | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6644 | https://github.com/huggingface/datasets/issues/6644 | false |
2,121,239,039 | https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name} | ### Describe the bug
I am working on a retrieval project and have encountered two issues in the Hugging Face FAISS integration:
1. I am trying to pass a dataset with a FAISS index to the Hugging Face Trainer. The code works for a CPU FAISS index, but not for a GPU one, which fails with the following error:
```
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader
train_dataset = self._remove_unused_columns(train_dataset, description="training")
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns
return dataset.remove_columns(ignored_columns)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper
out = func(dataset, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns
dataset = copy.deepcopy(self)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy
rv = reductor(4)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate
return {"this": serialize_index(self).tobytes()}
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index
write_index(index, writer)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index
```
The index was created with the add_faiss_index method
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
```
2. Although FAISS is written to be compatible with GPU search [https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU), I am getting an error when trying to use the Hugging Face code to do the search on GPU. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing the error
```
total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch
total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch
return self._indexes[index_name].search_batch(queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch
if not queries.flags.c_contiguous:
AttributeError: 'Tensor' object has no attribute 'flags'
```
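For what it's worth, a minimal workaround sketch for this second error (assuming the query `embeddings` are a PyTorch tensor and that `dataset` stands for the dataset carrying the `embeddings` index; converting the queries to a NumPy array is my assumption based on the `c_contiguous` check above, not a documented fix):
```python
import numpy as np

# search_batch inspects queries.flags.c_contiguous, so pass a C-contiguous NumPy array
# (also moving the tensor off the GPU before conversion)
queries = np.ascontiguousarray(embeddings.detach().cpu().numpy())
scores, examples = dataset.get_nearest_examples_batch("embeddings", queries, k=10)  # k is illustrative
```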
### Steps to reproduce the bug
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=data_collator,
tokenizer=tokenizer
)
train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
```
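As a possible mitigation for the first error, a sketch (assuming a standard `transformers.TrainingArguments` object; that this avoids the failing deepcopy is inferred from the traceback, not verified here):
```python
from transformers import TrainingArguments

# The crash happens in Trainer._remove_unused_columns -> Dataset.remove_columns,
# which deep-copies the dataset together with its (non-serialisable) GPU FAISS index.
# Keeping all columns skips that code path.
args = TrainingArguments(
    output_dir="out",              # illustrative
    remove_unused_columns=False,
)
```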
### Expected behavior
I would expect the FAISS index integration to be GPU-compatible.
### Environment info
huggingface Version: 2.16.1 | 2024-02-15T10:29:32Z | 6,643 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-06T16:41:00Z | https://api.github.com/repos/huggingface/datasets/issues/6643/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6643/timeline | Faiss GPU index cannot be serialised when passed to trainer | https://api.github.com/repos/huggingface/datasets/issues/6643/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4",
"events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}",
"followers_url": "https://api.github.com/users/rubenweitzman/followers",
"following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rubenweitzman",
"id": 56388976,
"login": "rubenweitzman",
"node_id": "MDQ6VXNlcjU2Mzg4OTc2",
"organizations_url": "https://api.github.com/users/rubenweitzman/orgs",
"received_events_url": "https://api.github.com/users/rubenweitzman/received_events",
"repos_url": "https://api.github.com/users/rubenweitzman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rubenweitzman"
} | [] | null | null | NONE | null | null | I_kwDODunzps5-b4n_ | [
"Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)",
"Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove the faiss index, as I would want to use it to create batches of retrieved samples from the dataset. \r\nThanks in advance for your help!",
"Issue number one seems to be an issue with FAISS indexes not being compatible with copy.deepcopy.\r\n\r\nMaybe you try to not remove the columns, e.g. by passing `remove_unused_columns=False`"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6643 | https://github.com/huggingface/datasets/issues/6643 | false |
2,119,085,766 | https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name} | ### Describe the bug
The loaded dataset has a different size from the one that was saved.
### Steps to reproduce the bug
Hi, I save the dataset in the following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"),
"test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")})
print(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
```
This yields the following output:
```
.data/hf_dataset/propaganda_zanr
Length of train dataset: 7642
Length of test dataset: 1000
```
Everything looks fine.
Then I load the dataset
```python
from datasets import load_dataset
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_dataset(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```
This prints:
```
Generating train split: 1 examples [00:00, 72.10 examples/s]
Generating test split: 1 examples [00:00, 100.69 examples/s]
Length of train dataset: 1
Length of test dataset: 1
```
I don't understand :(
### Expected behavior
The same dataset that was saved should be loaded.
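For reference, a minimal sketch of the matching load call (assuming the directory was written with `save_to_disk` as above — `load_from_disk` is the counterpart API, while `load_dataset(path)` re-builds a dataset from whatever raw files it finds):
```python
from datasets import load_from_disk

# load_from_disk reads the Arrow files and dataset_dict.json written by save_to_disk
dataset = load_from_disk(".data/hf_dataset/propaganda_zanr")
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```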
### Environment info
datasets==2.16.1 | 2024-02-06T09:50:19Z | 6,642 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-05T17:28:57Z | https://api.github.com/repos/huggingface/datasets/issues/6642/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6642/timeline | Differently dataset object saved than it is loaded. | https://api.github.com/repos/huggingface/datasets/issues/6642/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4",
"events_url": "https://api.github.com/users/MFajcik/events{/privacy}",
"followers_url": "https://api.github.com/users/MFajcik/followers",
"following_url": "https://api.github.com/users/MFajcik/following{/other_user}",
"gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFajcik",
"id": 31218150,
"login": "MFajcik",
"node_id": "MDQ6VXNlcjMxMjE4MTUw",
"organizations_url": "https://api.github.com/users/MFajcik/orgs",
"received_events_url": "https://api.github.com/users/MFajcik/received_events",
"repos_url": "https://api.github.com/users/MFajcik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFajcik"
} | [] | null | completed | NONE | 2024-02-06T09:50:19Z | null | I_kwDODunzps5-Tq7G | [
"I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` compatible dataset in a following way. I created a directory, and just copied jsonl there as `train.jsonl/test.jsonl`.\r\n```python\r\noutput_folder = os.path.join(args.output_folder, f\"{task_meta_type}_{task_type}\")\r\nos.makedirs(output_folder, exist_ok=True)\r\nfile = f\"{task_meta_type}_{task_type}_train.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"train.jsonl\"))\r\n# now test\r\nfile = f\"{task_meta_type}_{task_type}_test.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"test.jsonl\"))\r\n```\r\n",
"Hi @MFajcik, \r\n\r\nYou can find information about save_to_disk/load_from_disk in our docs:\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/process#save\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.save_to_disk\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.load_from_disk"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6642 | https://github.com/huggingface/datasets/issues/6642 | false |
2,116,963,132 | https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name} | ### Describe the bug
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
dataset = load_dataset('json', "samsum")  # call shown in Cell In[81] of the traceback below
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
Resolving data files: 100%
159/159 [00:00<00:00, 9909.28it/s]
Using custom data configuration samsum-0b1209637541c9e6
Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%
3/3 [00:00<00:00, 119.99it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 9.54it/s]
Generating train split:
88392/0 [00:15<00:00, 86848.17 examples/s]
Generating test split:
0/0 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files)
131 try:
--> 132 pa_table = paj.read_json(
133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
134 )
135 break
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Invalid value. in row 0
During handling of the above exception, another exception occurred:
UnicodeDecodeError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1818 _time = time.time()
-> 1819 for _, table in generator:
1820 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files)
152 with open(file, encoding="utf-8") as f:
--> 153 dataset = json.load(f)
154 except json.JSONDecodeError:
File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
277 a JSON document) to a Python object.
278
(...)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[81], line 5
1 from datasets import load_dataset
3 # Load dataset from the hub
4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data")
----> 5 dataset = load_dataset('json',"samsum")
6 #dataset = load_dataset("samsum")
7 print(f"Train dataset size: {len(dataset['train'])}")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1757 # Download and prepare data
-> 1758 builder_instance.download_and_prepare(
1759 download_config=download_config,
1760 download_mode=download_mode,
1761 ignore_verifications=ignore_verifications,
1762 try_from_hf_gcs=try_from_hf_gcs,
1763 num_proc=num_proc,
1764 )
1766 # Build dataset for splits
1767 keep_in_memory = (
1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1769 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
863 **prepare_split_kwargs,
864 **download_and_prepare_kwargs,
865 )
866 # Sync info
867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
949 split_dict.add(split_generator.split_info)
951 try:
952 # Prepare split will record examples associated to the split
--> 953 self._prepare_split(split_generator, **prepare_split_kwargs)
954 except OSError as e:
955 raise OSError(
956 "Cannot find data file. "
957 + (self.manual_download_instructions or "")
958 + "\nOriginal error:\n"
959 + str(e)
960 ) from None
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1706 gen_kwargs = split_generator.gen_kwargs
1707 job_id = 0
-> 1708 for job_id, done, content in self._prepare_split_single(
1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1710 ):
1711 if done:
1712 result = content
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1850 e = e.__context__
-> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset should load without a decoding error.
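For comparison, a minimal sketch of loading the Hub-hosted dataset directly — in `load_dataset('json', "samsum")` the second positional argument is interpreted as a config name of the packaged `json` builder rather than as a dataset id:
```python
from datasets import load_dataset

# Load the "samsum" dataset repository from the Hub instead of the generic json builder
dataset = load_dataset("samsum")
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
```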
### Environment info
dataset: samsum
system: win10
gpu:m40 24G | 2024-02-06T09:26:07Z | 6,641 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-02-04T08:49:31Z | https://api.github.com/repos/huggingface/datasets/issues/6641/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/6641/timeline | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | https://api.github.com/repos/huggingface/datasets/issues/6641/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4",
"events_url": "https://api.github.com/users/Hughhuh/events{/privacy}",
"followers_url": "https://api.github.com/users/Hughhuh/followers",
"following_url": "https://api.github.com/users/Hughhuh/following{/other_user}",
"gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hughhuh",
"id": 109789057,
"login": "Hughhuh",
"node_id": "U_kgDOBos_gQ",
"organizations_url": "https://api.github.com/users/Hughhuh/orgs",
"received_events_url": "https://api.github.com/users/Hughhuh/received_events",
"repos_url": "https://api.github.com/users/Hughhuh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hughhuh"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | not_planned | NONE | 2024-02-06T09:11:45Z | null | I_kwDODunzps5-Lks8 | [
"Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the information you provided, it seems an issue with the specific \"samsum\" dataset. I'm transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/samsum/discussions/5"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6641 | https://github.com/huggingface/datasets/issues/6641 | false |
2,115,864,531 | https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name} | ### Feature request
Currently, there are only a few Sign Language labels. I would like to propose adding all the Signed Languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for a handful of signed languages, yet there are many more signed languages in the world. As a result, some signed languages with a lot of readily available online data cannot be found on the Hub. For instance, there is no German Sign Language label on Hugging Face datasets, even though many German Sign Language datasets exist and are used very frequently in Sign Language Processing papers and models.
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | 2024-02-02T21:54:51Z | 6,640 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-02-02T21:54:51Z | https://api.github.com/repos/huggingface/datasets/issues/6640/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6640/timeline | Sign Language Support | https://api.github.com/repos/huggingface/datasets/issues/6640/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4",
"events_url": "https://api.github.com/users/Merterm/events{/privacy}",
"followers_url": "https://api.github.com/users/Merterm/followers",
"following_url": "https://api.github.com/users/Merterm/following{/other_user}",
"gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Merterm",
"id": 6684795,
"login": "Merterm",
"node_id": "MDQ6VXNlcjY2ODQ3OTU=",
"organizations_url": "https://api.github.com/users/Merterm/orgs",
"received_events_url": "https://api.github.com/users/Merterm/received_events",
"repos_url": "https://api.github.com/users/Merterm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Merterm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Merterm"
} | [] | null | null | NONE | null | null | I_kwDODunzps5-HYfT | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6640 | https://github.com/huggingface/datasets/issues/6640 | false |
2,114,620,200 | https://api.github.com/repos/huggingface/datasets/issues/6639/labels{/name} | A first step towards https://github.com/huggingface/datasets/issues/6529 | 2024-02-06T16:54:22Z | 6,639 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-02-02T10:36:49Z | https://api.github.com/repos/huggingface/datasets/issues/6639/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6639/timeline | Run download_and_prepare if missing splits | https://api.github.com/repos/huggingface/datasets/issues/6639/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6639.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6639",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6639.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6639"
} | PR_kwDODunzps5l0KPG | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6639/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6639 | https://github.com/huggingface/datasets/pull/6639 | true |