url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.1B) | node_id (stringlengths 18-32) | number (int64 1-3.58k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,642B) | updated_at (int64 1,587B-1,642B) | closed_at (int64 1,587B-1,642B, nullable) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 0-228k, nullable) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2874/comments | https://api.github.com/repos/huggingface/datasets/issues/2874/events | https://github.com/huggingface/datasets/pull/2874 | 989,685,328 | MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4 | 2,874 | Support streaming datasets that use pathlib | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```",
"@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... 😅 ",
"No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"
] | 1,631,000,149,000 | 1,631,039,122,000 | 1,631,014,875,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2874",
"html_url": "https://github.com/huggingface/datasets/pull/2874",
"diff_url": "https://github.com/huggingface/datasets/pull/2874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2874.patch",
"merged_at": 1631014875000
} | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2874/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2874/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2873/comments | https://api.github.com/repos/huggingface/datasets/issues/2873/events | https://github.com/huggingface/datasets/pull/2873 | 989,587,695 | MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw | 2,873 | adding swedish_medical_ner | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?",
"Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset"
] | 1,630,989,893,000 | 1,631,911,657,000 | 1,631,911,657,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2873",
"html_url": "https://github.com/huggingface/datasets/pull/2873",
"diff_url": "https://github.com/huggingface/datasets/pull/2873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2873.patch",
"merged_at": null
} | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2873/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2872/comments | https://api.github.com/repos/huggingface/datasets/issues/2872/events | https://github.com/huggingface/datasets/pull/2872 | 989,453,069 | MDExOlB1bGxSZXF1ZXN0NzI4MTkzMjkz | 2,872 | adding swedish_medical_ner | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,965,652,000 | 1,630,989,392,000 | 1,630,989,392,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2872",
"html_url": "https://github.com/huggingface/datasets/pull/2872",
"diff_url": "https://github.com/huggingface/datasets/pull/2872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2872.patch",
"merged_at": null
} | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2872/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2871/comments | https://api.github.com/repos/huggingface/datasets/issues/2871/events | https://github.com/huggingface/datasets/issues/2871 | 989,436,088 | MDU6SXNzdWU5ODk0MzYwODg= | 2,871 | datasets.config.PYARROW_VERSION has no attribute 'major' | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.",
"Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n",
"Sorted. Thanks!",
"Reopening this. Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps://github.com/huggingface/datasets/pull/2873",
"Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci/circleci: run_dataset_script_tests_pyarrow_1\" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue."
] | 1,630,962,417,000 | 1,631,091,112,000 | 1,631,091,112,000 | CONTRIBUTOR | null | null | null | In the test_dataset_common.py script, lines 288-289:
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__`=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2871/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2870/comments | https://api.github.com/repos/huggingface/datasets/issues/2870/events | https://github.com/huggingface/datasets/pull/2870 | 988,276,859 | MDExOlB1bGxSZXF1ZXN0NzI3MjI4Njk5 | 2,870 | Fix three typos in two files for documentation | {
"login": "leny-mi",
"id": 25124853,
"node_id": "MDQ6VXNlcjI1MTI0ODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25124853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leny-mi",
"html_url": "https://github.com/leny-mi",
"followers_url": "https://api.github.com/users/leny-mi/followers",
"following_url": "https://api.github.com/users/leny-mi/following{/other_user}",
"gists_url": "https://api.github.com/users/leny-mi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leny-mi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leny-mi/subscriptions",
"organizations_url": "https://api.github.com/users/leny-mi/orgs",
"repos_url": "https://api.github.com/users/leny-mi/repos",
"events_url": "https://api.github.com/users/leny-mi/events{/privacy}",
"received_events_url": "https://api.github.com/users/leny-mi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,756,183,000 | 1,630,916,481,000 | 1,630,916,375,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2870",
"html_url": "https://github.com/huggingface/datasets/pull/2870",
"diff_url": "https://github.com/huggingface/datasets/pull/2870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2870.patch",
"merged_at": 1630916375000
} | Changed "bacth_size" to "batch_size" (2x)
Changed "intsructions" to "instructions" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2870/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2869/comments | https://api.github.com/repos/huggingface/datasets/issues/2869/events | https://github.com/huggingface/datasets/issues/2869 | 987,676,420 | MDU6SXNzdWU5ODc2NzY0MjA= | 2,869 | TypeError: 'NoneType' object is not callable | {
"login": "Chenfei-Kang",
"id": 40911446,
"node_id": "MDQ6VXNlcjQwOTExNDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/40911446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chenfei-Kang",
"html_url": "https://github.com/Chenfei-Kang",
"followers_url": "https://api.github.com/users/Chenfei-Kang/followers",
"following_url": "https://api.github.com/users/Chenfei-Kang/following{/other_user}",
"gists_url": "https://api.github.com/users/Chenfei-Kang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chenfei-Kang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chenfei-Kang/subscriptions",
"organizations_url": "https://api.github.com/users/Chenfei-Kang/orgs",
"repos_url": "https://api.github.com/users/Chenfei-Kang/repos",
"events_url": "https://api.github.com/users/Chenfei-Kang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chenfei-Kang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?",
"> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n",
"- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?",
"> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error:\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in <module>\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!",
"For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.",
"One naive question: do you have internet access from the machine where you execute the code?",
"> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!"
] | 1,630,668,459,000 | 1,631,102,998,000 | 1,631,093,095,000 | NONE | null | null | null | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2869/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2868/comments | https://api.github.com/repos/huggingface/datasets/issues/2868/events | https://github.com/huggingface/datasets/issues/2868 | 987,139,146 | MDU6SXNzdWU5ODcxMzkxNDY= | 2,868 | Add Common Objects in 3D (CO3D) | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,630,614,972,000 | 1,638,964,930,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *Common Objects in 3D (CO3D)*
- **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)*
- **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)*
- **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)*
- **Motivation:** *excerpt from above blog post:*
> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.
>
> Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.
>
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2868/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2867/comments | https://api.github.com/repos/huggingface/datasets/issues/2867/events | https://github.com/huggingface/datasets/pull/2867 | 986,971,224 | MDExOlB1bGxSZXF1ZXN0NzI2MTE3NzAw | 2,867 | Add CaSiNo dataset | {
"login": "kushalchawla",
"id": 8416863,
"node_id": "MDQ6VXNlcjg0MTY4NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8416863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushalchawla",
"html_url": "https://github.com/kushalchawla",
"followers_url": "https://api.github.com/users/kushalchawla/followers",
"following_url": "https://api.github.com/users/kushalchawla/following{/other_user}",
"gists_url": "https://api.github.com/users/kushalchawla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushalchawla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushalchawla/subscriptions",
"organizations_url": "https://api.github.com/users/kushalchawla/orgs",
"repos_url": "https://api.github.com/users/kushalchawla/repos",
"events_url": "https://api.github.com/users/kushalchawla/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushalchawla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.",
"Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something?",
"Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;)"
] | 1,630,602,383,000 | 1,631,805,174,000 | 1,631,784,224,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2867",
"html_url": "https://github.com/huggingface/datasets/pull/2867",
"diff_url": "https://github.com/huggingface/datasets/pull/2867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2867.patch",
"merged_at": 1631784224000
} | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2867/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2866/comments | https://api.github.com/repos/huggingface/datasets/issues/2866/events | https://github.com/huggingface/datasets/issues/2866 | 986,706,676 | MDU6SXNzdWU5ODY3MDY2NzY= | 2,866 | "counter" dataset raises an error in normal mode, but not in streaming mode | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.",
"OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?",
"We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error/warning: see #2654.",
"Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 😉 ",
"Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. I close the issue, thanks @albertvillanova and @lhoestq!\r\n",
"Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.",
"Fixed here: https://github.com/huggingface/datasets/pull/2894. Thanks @albertvillanova ",
"On master, I get:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> iterable_dataset = ds.load_dataset('counter', split=\"train\", streaming=True)\r\n>>> rows = list(iterable_dataset.take(100))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 273, in __iter__\r\n yield from islice(self.ex_iterable, self.n)\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/b9e4378dbd3f5ce235d2302e48168c00196e67bbcd13cc7e1f6e69ef82c0cf2a/counter.py\", line 153, in _generate_examples\r\n files = sorted(base_path.glob(r\"[0-9][0-9][0-9][0-9].xml\"))\r\nTypeError: xpathglob() missing 1 required positional argument: 'pattern'\r\n```",
"Associated to the above exception, if I create a test and run it with pytest, I get an awful traceback.\r\n\r\n- create a file `test_counter.py`\r\n\r\n```python\r\nimport pytest\r\nfrom datasets import load_dataset, IterableDataset\r\nfrom typing import Any, cast\r\n\r\n\r\ndef test_counter() -> Any:\r\n iterable_dataset = cast(IterableDataset, load_dataset(\"counter\", split=\"train\", streaming=True))\r\n with pytest.raises(TypeError):\r\n list(iterable_dataset.take(100))\r\n```\r\n\r\n- run the test with pytest\r\n\r\n```bash\r\n$ python -m pytest -x test_counter.py\r\n============================================================================================================================= test session starts ==============================================================================================================================\r\nplatform linux -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0\r\nrootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml\r\nplugins: anyio-3.3.2, cov-2.12.1\r\ncollected 1 item\r\n\r\ntests/test_counter.py . [100%]Traceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pytest/__main__.py\", line 5, in <module>\r\n raise SystemExit(pytest.console_main())\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 185, in console_main\r\n code = main()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 162, in main\r\n ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 60, in _multicall\r\n return outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 316, in pytest_cmdline_main\r\n return wrap_session(config, _main)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 304, in wrap_session\r\n config.hook.pytest_sessionfinish(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n 
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 55, in _multicall\r\n gen.send(outcome)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/terminal.py\", line 803, in pytest_sessionfinish\r\n outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 428, in pytest_sessionfinish\r\n config.cache.set(\"cache/nodeids\", sorted(self.cached_nodeids))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 188, in set\r\n f = path.open(\"w\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 199, in xpathopen\r\n return xopen(_as_posix(path), *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 117, in _as_posix\r\n path_as_posix = path.as_posix()\r\nAttributeError: 'str' object has no attribute 'as_posix'\r\n```\r\n",
"I opened a PR to fix these issues.\r\nAlso in your test you expect a TypeError but I don't know why. On my side it works fine without raising a TypeError",
"I had the issue (TypeError raised) on my branch, but it's fixed now. Thanks"
] | 1,630,588,253,000 | 1,634,203,449,000 | 1,634,203,449,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2866/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2865/comments | https://api.github.com/repos/huggingface/datasets/issues/2865/events | https://github.com/huggingface/datasets/pull/2865 | 986,460,698 | MDExOlB1bGxSZXF1ZXN0NzI1NjY1ODgx | 2,865 | Add MultiEURLEX dataset | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ",
"Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ",
"Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.",
"Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!",
"Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !",
"Everything is ready! 👍 \r\n"
] | 1,630,575,744,000 | 1,631,274,606,000 | 1,631,274,606,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2865",
"html_url": "https://github.com/huggingface/datasets/pull/2865",
"diff_url": "https://github.com/huggingface/datasets/pull/2865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2865.patch",
"merged_at": 1631274606000
} | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2865/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2865/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2864/comments | https://api.github.com/repos/huggingface/datasets/issues/2864/events | https://github.com/huggingface/datasets/pull/2864 | 986,159,438 | MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw | 2,864 | Fix data URL in ToTTo dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [] | 1,630,560,308,000 | 1,630,565,260,000 | 1,630,565,260,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2864",
"html_url": "https://github.com/huggingface/datasets/pull/2864",
"diff_url": "https://github.com/huggingface/datasets/pull/2864.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2864.patch",
"merged_at": 1630565260000
} | Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43.
Fix #2860. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2864/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2863/comments | https://api.github.com/repos/huggingface/datasets/issues/2863/events | https://github.com/huggingface/datasets/pull/2863 | 986,156,755 | MDExOlB1bGxSZXF1ZXN0NzI1MzkwMTkx | 2,863 | Update dataset URL | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. 😉 "
] | 1,630,560,138,000 | 1,630,570,250,000 | 1,630,570,250,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2863",
"html_url": "https://github.com/huggingface/datasets/pull/2863",
"diff_url": "https://github.com/huggingface/datasets/pull/2863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2863.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2863/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2862/comments | https://api.github.com/repos/huggingface/datasets/issues/2862/events | https://github.com/huggingface/datasets/issues/2862 | 985,763,001 | MDU6SXNzdWU5ODU3NjMwMDE= | 2,862 | Only retain relevant statistics in certain metrics | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,630,534,690,000 | 1,630,534,690,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
As I understand, in the `add_batch()` function, the raw predictions and references are kept (in memory?) until `compute()` is called.
https://github.com/huggingface/datasets/blob/e248247518140d5b0527ce2843a1a327e2902059/src/datasets/metric.py#L423-L442
This takes O(n) memory. However, for many (most?) metrics, this is not necessary. E.g., for accuracy, only the # correct and # total need to be recorded.
**Describe the solution you'd like**
Probably an inheritance hierarchy where `"predictions"` and `"references"` are not always the two keys for the final metric computation. Each metric should create and maintain its own relevant statistics, again for example, `"n_correct"` and `"n_total"` for accuracy.
I believe the metrics in AllenNLP (https://github.com/allenai/allennlp/tree/39c40fe38cd2fd36b3465b0b3c031f54ec824160/allennlp/training/metrics) can be used as a good reference.
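For illustration, a minimal sketch of what a statistics-only accuracy metric could look like (a hypothetical class, not the current `datasets.Metric` API):
```python
class StreamingAccuracy:
    """Accumulates only the sufficient statistics, in O(1) memory."""

    def __init__(self):
        self.n_correct = 0
        self.n_total = 0

    def add_batch(self, predictions, references):
        # update the counters instead of storing the raw batches
        self.n_correct += sum(p == r for p, r in zip(predictions, references))
        self.n_total += len(references)

    def compute(self):
        return {"accuracy": self.n_correct / self.n_total if self.n_total else 0.0}
```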
**Describe alternatives you've considered**
At least `Metric.compute()` shouldn't hard-code `"predictions"` and `"references"` so that custom subclasses may override this behavior.
https://github.com/huggingface/datasets/blob/e248247518140d5b0527ce2843a1a327e2902059/src/datasets/metric.py#L399-L400 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2862/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2861/comments | https://api.github.com/repos/huggingface/datasets/issues/2861/events | https://github.com/huggingface/datasets/pull/2861 | 985,081,871 | MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw | 2,861 | fix: 🐛 be more specific when catching exceptions | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n",
"Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ",
"And what about passing the `timeout` parameter instead?",
"It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`",
"I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...",
"Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case."
] | 1,630,498,692,000 | 1,630,576,416,000 | 1,630,576,323,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2861",
"html_url": "https://github.com/huggingface/datasets/pull/2861",
"diff_url": "https://github.com/huggingface/datasets/pull/2861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2861.patch",
"merged_at": null
} | The same specific exception is caught in other parts of the same
function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2861/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2860/comments | https://api.github.com/repos/huggingface/datasets/issues/2860/events | https://github.com/huggingface/datasets/issues/2860 | 985,013,339 | MDU6SXNzdWU5ODUwMTMzMzk= | 2,860 | Cannot download TOTTO dataset | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it."
] | 1,630,494,250,000 | 1,630,565,260,000 | 1,630,565,260,000 | NONE | null | null | null | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2860/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2859/comments | https://api.github.com/repos/huggingface/datasets/issues/2859/events | https://github.com/huggingface/datasets/issues/2859 | 984,324,500 | MDU6SXNzdWU5ODQzMjQ1MDA= | 2,859 | Loading allenai/c4 in streaming mode does too many HEAD requests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186",
"Thanks a lot!!!"
] | 1,630,444,264,000 | 1,634,024,152,000 | 1,633,950,351,000 | MEMBER | null | null | null | This does 60,000+ HEAD requests to get all the ETags of all the data files:
```python
from datasets import load_dataset
load_dataset("allenai/c4", streaming=True)
```
It makes loading the dataset completely impractical.
The ETags are used to compute the config id (it must depend on the data files being used).
Instead of using the ETags, we could simply use the commit hash of the dataset repository on the hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository).
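For illustration, a minimal sketch of this idea (`config_id` is a hypothetical helper name, not the actual `datasets` implementation):
```python
from hashlib import sha256

def config_id(repo_revision_sha: str, data_files_pattern: str = "*") -> str:
    # one hash over (revision, glob pattern) still changes whenever the
    # data files change, without one HEAD request per file for the ETags
    return sha256(f"{repo_revision_sha}:{data_files_pattern}".encode()).hexdigest()
``` | {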
"url": "https://api.github.com/repos/huggingface/datasets/issues/2859/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2859/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2858/comments | https://api.github.com/repos/huggingface/datasets/issues/2858/events | https://github.com/huggingface/datasets/pull/2858 | 984,145,568 | MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0 | 2,858 | Fix s3fs version in CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,433,143,000 | 1,630,935,215,000 | 1,630,445,391,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2858",
"html_url": "https://github.com/huggingface/datasets/pull/2858",
"diff_url": "https://github.com/huggingface/datasets/pull/2858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2858.patch",
"merged_at": 1630445391000
} | The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore.
This PR changes the constraints to avoid the new conflicts.
In particular it pins the version of s3fs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2858/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2857/comments | https://api.github.com/repos/huggingface/datasets/issues/2857/events | https://github.com/huggingface/datasets/pull/2857 | 984,093,938 | MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4 | 2,857 | Update: Openwebtext - update size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"merging since the CI error in unrelated to this PR and fixed on master"
] | 1,630,429,863,000 | 1,631,007,872,000 | 1,631,007,872,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2857",
"html_url": "https://github.com/huggingface/datasets/pull/2857",
"diff_url": "https://github.com/huggingface/datasets/pull/2857.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2857.patch",
"merged_at": 1631007872000
} | Update the size of the Openwebtext dataset
I also regenerated the dataset_infos.json; the data file checksum didn't change, nor did the number of examples (8013769 examples)
related to #2839 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2857/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2856/comments | https://api.github.com/repos/huggingface/datasets/issues/2856/events | https://github.com/huggingface/datasets/pull/2856 | 983,876,734 | MDExOlB1bGxSZXF1ZXN0NzIzMzg2NzIw | 2,856 | fix: 🐛 remove URL's query string only if it's ?dl=1 | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,417,207,000 | 1,630,419,732,000 | 1,630,419,732,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2856",
"html_url": "https://github.com/huggingface/datasets/pull/2856",
"diff_url": "https://github.com/huggingface/datasets/pull/2856.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2856.patch",
"merged_at": 1630419732000
} | A lot of URLs use query strings, for example
http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, so we
must not remove them when trying to detect the protocol. We thus remove
the query string only when it is ?dl=1, which occurs on dropbox
and dl.orangedox.com. Also: add unit tests.
See https://github.com/huggingface/datasets/pull/2843 for the original
discussion. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2856/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2855/comments | https://api.github.com/repos/huggingface/datasets/issues/2855/events | https://github.com/huggingface/datasets/pull/2855 | 983,858,229 | MDExOlB1bGxSZXF1ZXN0NzIzMzcxMTIy | 2,855 | Fix windows CI CondaError | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,416,122,000 | 1,630,416,934,000 | 1,630,416,933,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2855",
"html_url": "https://github.com/huggingface/datasets/pull/2855",
"diff_url": "https://github.com/huggingface/datasets/pull/2855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2855.patch",
"merged_at": 1630416933000
} | From this thread: https://github.com/conda/conda/issues/6057
We can fix the conda error
```
CondaError: Cannot link a source that does not exist.
C:\Users\...\Anaconda3\Scripts\conda.exe
```
by doing
```bash
conda update conda
```
before doing any install in the windows CI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2855/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2854/comments | https://api.github.com/repos/huggingface/datasets/issues/2854/events | https://github.com/huggingface/datasets/pull/2854 | 983,726,084 | MDExOlB1bGxSZXF1ZXN0NzIzMjU3NDg5 | 2,854 | Fix caching when moving script | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Merging since the CI failure is unrelated to this PR"
] | 1,630,407,515,000 | 1,630,415,616,000 | 1,630,415,616,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2854",
"html_url": "https://github.com/huggingface/datasets/pull/2854",
"diff_url": "https://github.com/huggingface/datasets/pull/2854.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2854.patch",
"merged_at": 1630415616000
} | When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code.
Using the full path of the python script for the location of the code makes the hash change if a script like `run_mlm.py` is moved.
I changed this by simply using the base name of the script instead of the full path.
Note that this change also affects the hash of the code used from imported modules, but I think it's fine. Indeed it hashes the code of the imported modules anyway, so the location of the python files of the imported modules doesn't matter when computing the hash.
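For illustration, the gist of the change (a simplified sketch, not the exact diff):
```python
import os

def code_location_for_hash(script_path: str) -> str:
    # before: the full path contributed to the hash, so moving run_mlm.py
    # to another directory invalidated the cache
    # after: only the base name contributes, so the hash survives a move
    return os.path.basename(script_path)
```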
Close https://github.com/huggingface/datasets/issues/2825 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2854/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2853/comments | https://api.github.com/repos/huggingface/datasets/issues/2853/events | https://github.com/huggingface/datasets/pull/2853 | 983,692,026 | MDExOlB1bGxSZXF1ZXN0NzIzMjI4NDY3 | 2,853 | Add AMI dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey @cahya-wirawan, \r\n\r\nI played around with the dataset a bit and it looks already very good to me! That's exactly how it should be constructed :-) I can help you a bit with defining the config, etc... on Monday!",
"@lhoestq - I think the dataset is ready to be merged :-) \r\n\r\nAt the moment, I don't really see how the failing tests correspond to this PR:\r\n- https://app.circleci.com/pipelines/github/huggingface/datasets/7838/workflows/932a40a2-3e11-48be-84f0-c6434510058e/jobs/48318?invite=true#step-107-18\r\n- https://app.circleci.com/pipelines/github/huggingface/datasets/7838/workflows/932a40a2-3e11-48be-84f0-c6434510058e/jobs/48316?invite=true#step-102-136\r\n\r\ncould you maybe give it a look? :-)"
] | 1,630,405,141,000 | 1,632,907,159,000 | 1,632,907,159,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2853",
"html_url": "https://github.com/huggingface/datasets/pull/2853",
"diff_url": "https://github.com/huggingface/datasets/pull/2853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2853.patch",
"merged_at": 1632907158000
} | This is an initial commit for AMI dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2853/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2853/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2852/comments | https://api.github.com/repos/huggingface/datasets/issues/2852/events | https://github.com/huggingface/datasets/pull/2852 | 983,609,352 | MDExOlB1bGxSZXF1ZXN0NzIzMTU4Mzc4 | 2,852 | Fix: linnaeus - fix url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Merging since the CI error is unrelated this this PR"
] | 1,630,399,873,000 | 1,630,415,530,000 | 1,630,415,529,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2852",
"html_url": "https://github.com/huggingface/datasets/pull/2852",
"diff_url": "https://github.com/huggingface/datasets/pull/2852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2852.patch",
"merged_at": 1630415529000
} | The URL was causing a `ConnectionError` because of the trailing "/"
Close https://github.com/huggingface/datasets/issues/2821 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2852/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2851/comments | https://api.github.com/repos/huggingface/datasets/issues/2851/events | https://github.com/huggingface/datasets/pull/2851 | 982,789,593 | MDExOlB1bGxSZXF1ZXN0NzIyNDg4MDY2 | 2,851 | Update `column_names` showed as `:func:` in exploring.st | {
"login": "ClementRomac",
"id": 8899812,
"node_id": "MDQ6VXNlcjg4OTk4MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8899812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClementRomac",
"html_url": "https://github.com/ClementRomac",
"followers_url": "https://api.github.com/users/ClementRomac/followers",
"following_url": "https://api.github.com/users/ClementRomac/following{/other_user}",
"gists_url": "https://api.github.com/users/ClementRomac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ClementRomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClementRomac/subscriptions",
"organizations_url": "https://api.github.com/users/ClementRomac/orgs",
"repos_url": "https://api.github.com/users/ClementRomac/repos",
"events_url": "https://api.github.com/users/ClementRomac/events{/privacy}",
"received_events_url": "https://api.github.com/users/ClementRomac/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,329,706,000 | 1,630,485,731,000 | 1,630,421,146,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2851",
"html_url": "https://github.com/huggingface/datasets/pull/2851",
"diff_url": "https://github.com/huggingface/datasets/pull/2851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2851.patch",
"merged_at": 1630421146000
} | Hi,
One mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2851/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2850/comments | https://api.github.com/repos/huggingface/datasets/issues/2850/events | https://github.com/huggingface/datasets/issues/2850 | 982,654,644 | MDU6SXNzdWU5ODI2NTQ2NDQ= | 2,850 | Wound segmentation datasets | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,630,320,272,000 | 1,638,964,920,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** Wound segmentation datasets
- **Description:** annotated wound image dataset
- **Paper:** https://www.nature.com/articles/s41598-020-78799-w
- **Data:** https://github.com/uwm-bigdata/wound-segmentation
- **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2850/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2849/comments | https://api.github.com/repos/huggingface/datasets/issues/2849/events | https://github.com/huggingface/datasets/issues/2849 | 982,631,420 | MDU6SXNzdWU5ODI2MzE0MjA= | 2,849 | Add Open Catalyst Project Dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,630,318,479,000 | 1,630,318,479,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** Open Catalyst 2020 (OC20) Dataset
- **Website:** https://opencatalystproject.org/
- **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2849/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2848/comments | https://api.github.com/repos/huggingface/datasets/issues/2848/events | https://github.com/huggingface/datasets/pull/2848 | 981,953,908 | MDExOlB1bGxSZXF1ZXN0NzIxODYyMDQx | 2,848 | Update README.md | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Merging since the CI error is unrelated to this PR and fixed on master"
] | 1,630,195,106,000 | 1,631,007,632,000 | 1,631,007,632,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2848",
"html_url": "https://github.com/huggingface/datasets/pull/2848",
"diff_url": "https://github.com/huggingface/datasets/pull/2848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2848.patch",
"merged_at": 1631007632000
} | Changed 'Tain' to 'Train'. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2848/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2847/comments | https://api.github.com/repos/huggingface/datasets/issues/2847/events | https://github.com/huggingface/datasets/pull/2847 | 981,589,693 | MDExOlB1bGxSZXF1ZXN0NzIxNjA3OTA0 | 2,847 | fix regex to accept negative timezone | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,097,645,000 | 1,631,565,590,000 | 1,631,007,263,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2847",
"html_url": "https://github.com/huggingface/datasets/pull/2847",
"diff_url": "https://github.com/huggingface/datasets/pull/2847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2847.patch",
"merged_at": 1631007263000
} | fix #2846 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2847/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2846/comments | https://api.github.com/repos/huggingface/datasets/issues/2846/events | https://github.com/huggingface/datasets/issues/2846 | 981,587,590 | MDU6SXNzdWU5ODE1ODc1OTA= | 2,846 | Negative timezone | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by #2847."
] | 1,630,097,433,000 | 1,631,274,667,000 | 1,631,274,667,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
The load_dataset method does not accept a parquet file with a negative timezone, as it validates the timestamp dtype with the following regex:
```
"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
```
So a valid timestamp type like ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files.
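For illustration, one possible fix is simply to allow `-` in the tz character class (a sketch; the actual fix in the linked PR may differ slightly):
```python
import re

current = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$")
fixed = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

assert current.match("us, tz=-03:00") is None      # valid tz is rejected
assert fixed.match("us, tz=-03:00") is not None    # negative offset accepted
```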
## Steps to reproduce the bug
```python
# Where the timestamp column has a tz of -03:00
datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files,
'test': test_files}, cache_dir="./cache_teste/")
```
## Expected results
A tz of -03:00 is valid, so the regex should accept it without raising an error.
## Actual results
Because this regex rejects a valid tz, the following error is raised:
```python
raise ValueError(
f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp."
f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]"
f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp"
)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2846/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2845/comments | https://api.github.com/repos/huggingface/datasets/issues/2845/events | https://github.com/huggingface/datasets/issues/2845 | 981,487,861 | MDU6SXNzdWU5ODE0ODc4NjE= | 2,845 | [feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,630,088,511,000 | 1,630,088,645,000 | null | CONTRIBUTOR | null | null | null | Often, there is a need to prepare a dataset but not use it immediately, e.g. for a test suite's setup, so it'd be really useful to be able to do:
```
if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds)
```
This can already be done with:
```
builder = load_dataset_builder(ds)
if not os.path.isdir(builder.cache_dir):
builder.download_and_prepare()
```
but the current way is far less intuitive and much harder to remember than the proposed API, IMHO.
One more way is to do:
```
_ = load_dataset(ds)
```
but it wastes resources loading the dataset when it's not needed.
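For illustration, the proposed helpers could be thin wrappers over the existing builder API (a sketch; these functions do not exist in `datasets` today):
```python
import os
from datasets import load_dataset_builder

def is_dataset_cached(ds: str) -> bool:
    builder = load_dataset_builder(ds)
    return os.path.isdir(builder.cache_dir)

def cache_dataset(ds: str) -> None:
    # prepares the dataset in the cache without loading it into memory
    load_dataset_builder(ds).download_and_prepare()
```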
this has been discussed at https://huggingface.slack.com/archives/C01229B19EX/p1630021912025800
Thank you!
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2845/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2844/comments | https://api.github.com/repos/huggingface/datasets/issues/2844/events | https://github.com/huggingface/datasets/pull/2844 | 981,382,806 | MDExOlB1bGxSZXF1ZXN0NzIxNDQzMjY2 | 2,844 | Fix: wikicorpus - fix keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI error is unrelated to this PR\r\n\r\n... merging !"
] | 1,630,079,766,000 | 1,630,937,248,000 | 1,630,937,247,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2844",
"html_url": "https://github.com/huggingface/datasets/pull/2844",
"diff_url": "https://github.com/huggingface/datasets/pull/2844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2844.patch",
"merged_at": 1630937247000
} | As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`.
I fixed that by taking into account the file index in the keys. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2844/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2843/comments | https://api.github.com/repos/huggingface/datasets/issues/2843/events | https://github.com/huggingface/datasets/pull/2843 | 981,317,775 | MDExOlB1bGxSZXF1ZXN0NzIxMzkwODA5 | 2,843 | Fix extraction protocol inference from urls with params | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"merging since the windows error is just a CircleCI issue",
"It works, eg https://observablehq.com/@huggingface/datasets-preview-backend-client#{%22datasetId%22%3A%22discovery%22} and https://datasets-preview.huggingface.tech/rows?dataset=discovery&config=discovery&split=train",
"Nice !"
] | 1,630,075,257,000 | 1,630,343,509,000 | 1,630,329,121,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2843",
"html_url": "https://github.com/huggingface/datasets/pull/2843",
"diff_url": "https://github.com/huggingface/datasets/pull/2843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2843.patch",
"merged_at": 1630329121000
} | Previously, the library was unable to infer the compression protocol for files at URLs like
```
https://foo.bar/train.json.gz?dl=1
```
because of the query parameters.
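A hedged sketch of the idea (assuming the standard library `urllib.parse`; the actual implementation may differ): strip the query string before looking at the file extension.
```python
import os
from urllib.parse import urlparse

url = "https://foo.bar/train.json.gz?dl=1"
path = urlparse(url).path              # "/train.json.gz", query string dropped
extension = os.path.splitext(path)[1]  # ".gz", which maps to the gzip protocol
print(extension)
```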
I fixed that; this should allow 10+ datasets to work in streaming mode:
```
"discovery",
"emotion",
"grail_qa",
"guardian_authorship",
"pragmeval",
"simple_questions_v2",
"versae/adobo",
"w-nicole/childes_data",
"w-nicole/childes_data_no_tags_",
"w-nicole/childes_data_with_tags",
"w-nicole/childes_data_with_tags_"
```
cc @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2843/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2842/comments | https://api.github.com/repos/huggingface/datasets/issues/2842/events | https://github.com/huggingface/datasets/issues/2842 | 980,725,899 | MDU6SXNzdWU5ODA3MjU4OTk= | 2,842 | always requiring the username in the dataset name when there is one | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?",
"I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\npython -c \"from datasets import load_dataset; load_dataset('stas/openwebtext-10k')\"\r\n# now run immediately\r\npython -c \"from datasets import load_dataset; load_dataset('openwebtext-10k')\"\r\n# the second command should fail, but it doesn't fail now.\r\n```\r\n\r\nMoreover, if someone were to create `openwebtext-10k` w/o the prefix, they will now get the wrong dataset, if they previously downloaded `stas/openwebtext-10k`.\r\n\r\nAnd if there are 2 users with the same dataset name `foo/ds` and `bar/ds` - currently this won't work to get the correct dataset.\r\n\r\nSo really there 3 unrelated issues hiding in the current behavior.",
"This has been fixed now, and we'll do a new release of the library today.\r\n\r\nNow the stas/openwebtext-10k dataset is cached at `.cache/huggingface/datasets/stas___openwebtext10k` and openwebtext-10k would be at `.cache/huggingface/datasets/openwebtext10k`. Since they are different, the cache won't fall back on loading the wrong one anymore.\r\n\r\nSame for the python script used to generate the dataset: stas/openwebtext-10k is cached at `.cache/huggingface/modules/datasets_modules/datasets/stas___openwebtext10k` and openwebtext-10k would be at `.cache/huggingface/modules/datasets_modules/datasets/openwebtext10k`",
"Amazing! Thank you for adding this improvement, @lhoestq!",
"(can be closed?)",
"Yes indeed :) thanks"
] | 1,630,020,713,000 | 1,634,895,815,000 | 1,634,895,815,000 | CONTRIBUTOR | null | null | null | Another person and I have now been bitten by `datasets`' laxness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost the `stas/` prefix, and continued using `openwebtext-10k`. All was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request asks to tighten the checking and disallow loading a dataset that was downloaded with the user prefix but is then referenced without it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2842/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2842/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2841/comments | https://api.github.com/repos/huggingface/datasets/issues/2841/events | https://github.com/huggingface/datasets/issues/2841 | 980,497,321 | MDU6SXNzdWU5ODA0OTczMjE= | 2,841 | Adding GLUECoS Hinglish and Spanglish code-switching benchmark | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Hi @yjernite I am interested in adding this dataset. \r\nIn the repo they have also added a code mixed MT task from English to Hinglish [here](https://github.com/microsoft/GLUECoS#code-mixed-machine-translation-task). I think this could be a good dataset addition in itself and then I can add the rest of the GLUECoS tasks as one dataset. What do you think?"
] | 1,630,000,059,000 | 1,634,755,280,000 | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** GLUECoS
- **Description:** a Microsoft benchmark for evaluating code-switching across a variety of tasks, though for only two language pairs
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/microsoft/GLUECoS
- **Motivation:** We currently only have [one other](https://huggingface.co/datasets/lince) dataset for code-switching
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2841/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2840/comments | https://api.github.com/repos/huggingface/datasets/issues/2840/events | https://github.com/huggingface/datasets/issues/2840 | 980,489,074 | MDU6SXNzdWU5ODA0ODkwNzQ= | 2,840 | How can I compute a BLEU-4 score using `load_metric`? | {
"login": "Doragd",
"id": 26213546,
"node_id": "MDQ6VXNlcjI2MjEzNTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Doragd",
"html_url": "https://github.com/Doragd",
"followers_url": "https://api.github.com/users/Doragd/followers",
"following_url": "https://api.github.com/users/Doragd/following{/other_user}",
"gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doragd/subscriptions",
"organizations_url": "https://api.github.com/users/Doragd/orgs",
"repos_url": "https://api.github.com/users/Doragd/repos",
"events_url": "https://api.github.com/users/Doragd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Doragd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,999,397,000 | 1,630,052,004,000 | 1,630,052,004,000 | NONE | null | null | null | I have found the sacrebleu metric, but I do not know the difference between it and BLEU-4.
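For reference, the built-in `bleu` metric computes n-gram BLEU with `max_order=4` by default, i.e. BLEU-4, while `sacrebleu` operates on detokenized strings. A hedged sketch, assuming that API:
```python
from datasets import load_metric

bleu = load_metric("bleu")  # max_order defaults to 4, i.e. BLEU-4
predictions = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```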
If I want to compute a BLEU-4 score, what can I do? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2840/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2839/comments | https://api.github.com/repos/huggingface/datasets/issues/2839/events | https://github.com/huggingface/datasets/issues/2839 | 980,271,715 | MDU6SXNzdWU5ODAyNzE3MTU= | 2,839 | OpenWebText: NonMatchingSplitsSizesError | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, I'm updating the verifications metadata",
"I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.\r\n\r\nCan you try to delete your cache ( by default at `~/.cache/huggingface/datasets`) and try again please ?\r\nAlso, on which platform are you (linux/macos/windows) ?",
"I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode=\"force_redownload\"` would bypass cache.\r\nSorry plateform should be linux (Redhat version 8.1)",
"Hi @thomasw21 , are you still having this issue after clearing your cache ?",
"Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look !"
] | 1,629,985,826,000 | 1,632,233,560,000 | 1,632,233,383,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't seem to match the documentation:
`Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggests the total size is 12.9 GB, whereas the documentation mentions `Size of downloaded dataset files: 12283.35 MB`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```
## Expected results
Loading is successful
## Actual results
Loading throws above error.
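(As a side note, a possible workaround while this is investigated is to skip the verification with the existing `ignore_verifications` flag; this bypasses the size checks rather than fixing them.)
```python
from datasets import load_dataset

# Bypasses NonMatchingSplitsSizesError by skipping size/checksum checks.
load_dataset("openwebtext", ignore_verifications=True)
```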
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2839/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2838/comments | https://api.github.com/repos/huggingface/datasets/issues/2838/events | https://github.com/huggingface/datasets/pull/2838 | 980,067,186 | MDExOlB1bGxSZXF1ZXN0NzIwMzcxMDk5 | 2,838 | Add error_bad_chunk to the JSON loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,629,972,452,000 | 1,629,972,486,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2838",
"html_url": "https://github.com/huggingface/datasets/pull/2838",
"diff_url": "https://github.com/huggingface/datasets/pull/2838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2838.patch",
"merged_at": null
} | Add the `error_bad_chunk` parameter to the JSON loader.
Setting `error_bad_chunk=False` makes it possible to skip an unparsable chunk of JSON data without raising an error.
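A hedged usage sketch of the proposed parameter (this PR is not merged, so the exact name and behavior may change):
```python
from datasets import load_dataset

# Skip malformed JSON chunks instead of failing the whole load.
ds = load_dataset("json", data_files="data.jsonl", error_bad_chunk=False)
```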
Additional note:
In case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in streaming mode) to get the JSON fields that the user may have forgotten to pass. For example, for SQuAD-like data, the user has to pass `field="data"` to tell the loader to get the list of examples from this field.
TODO: update docs
cc @lvwerra | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2838/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2837/comments | https://api.github.com/repos/huggingface/datasets/issues/2837/events | https://github.com/huggingface/datasets/issues/2837 | 979,298,297 | MDU6SXNzdWU5NzkyOTgyOTc= | 2,837 | prepare_module issue when loading from read-only fs | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hello, I opened #2887 to fix this."
] | 1,629,904,886,000 | 1,633,456,702,000 | 1,633,456,702,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When we use `prepare_module` from a read-only file system, we create a `FileLock` using the `local_path`.
This path is not necessarily writable.
`lock_path = local_path + ".lock"`
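A hedged sketch of one possible fix (`cache_dir` is a hypothetical writable directory, not the actual variable name in the code): derive the lock path from a writable location instead of the module's own path.
```python
import os

def writable_lock_path(local_path: str, cache_dir: str) -> str:
    # Place the lock file in a writable cache directory rather than
    # next to the (possibly read-only) module file.
    return os.path.join(cache_dir, os.path.basename(local_path) + ".lock")
```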
## Steps to reproduce the bug
Run `load_dataset` on a read-only Python loader file.
```python
ds = load_dataset(
python_loader, data_files={"train": train_path, "test": test_path}
)
```
where `python_loader` is a path to a file located in a read-only folder.
## Expected results
This should work I think?
## Actual results
```python
return load_dataset(
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset
module_path, hash, resolved_file_path = prepare_module(
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module
with FileLock(lock_path):
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__
self.acquire()
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire
self._acquire()
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2837/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2836/comments | https://api.github.com/repos/huggingface/datasets/issues/2836/events | https://github.com/huggingface/datasets/pull/2836 | 979,230,142 | MDExOlB1bGxSZXF1ZXN0NzE5NjY5MDUy | 2,836 | Optimize Dataset.filter to only compute the indices to keep | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Maybe worth updating the docs here as well?",
"Yup, will do !"
] | 1,629,902,482,000 | 1,631,631,113,000 | 1,631,548,221,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2836",
"html_url": "https://github.com/huggingface/datasets/pull/2836",
"diff_url": "https://github.com/huggingface/datasets/pull/2836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2836.patch",
"merged_at": 1631548221000
} | Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space.
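A rough sketch of the idea (an illustration, not the actual implementation): compute only the indices that pass the predicate, then back the result with an indices mapping via `select` instead of copying rows into a new table.
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})

# Compute only the indices to keep...
keep = [i for i, example in enumerate(ds) if example["a"] % 2 == 0]

# ...then reuse the original Arrow table through an indices mapping.
filtered = ds.select(keep)
print(filtered["a"])  # [0, 2, 4, 6, 8]
```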
This will be useful for processing audio datasets, for example. cc @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2836/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2836/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2835/comments | https://api.github.com/repos/huggingface/datasets/issues/2835/events | https://github.com/huggingface/datasets/pull/2835 | 979,209,394 | MDExOlB1bGxSZXF1ZXN0NzE5NjUxOTE4 | 2,835 | Update: timit_asr - make the dataset streamable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,901,369,000 | 1,631,020,547,000 | 1,631,020,546,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2835",
"html_url": "https://github.com/huggingface/datasets/pull/2835",
"diff_url": "https://github.com/huggingface/datasets/pull/2835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2835.patch",
"merged_at": 1631020546000
} | The TIMIT ASR dataset had two issues that were preventing it from being streamable:
1. it was missing a call to `open` before `pd.read_csv`
2. it was using `os.path.dirname` which is not supported for streaming
I made the dataset streamable by using `open` to load the CSV, and by adding support for `os.path.dirname` in dataset scripts in streaming mode.
You can now do
```python
from datasets import load_dataset
timit_asr = load_dataset("timit_asr", streaming=True)
print(next(iter(timit_asr["train"])))
```
prints:
```json
{"file": "zip://data/TRAIN/DR4/MMDM0/SI681.WAV::https://data.deepai.org/timit.zip",
"phonetic_detail": {"start": [0, 1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720],
"utterance": ["h#", "w", "ix", "dcl", "s", "ah", "tcl", "ch", "ix", "n", "ae", "kcl", "t", "ix", "v", "r", "ix", "f", "y", "ux", "zh", "el", "bcl", "b", "iy", "y", "ux", "s", "f", "el", "h#"],
"stop": [1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720, 39920]},
"sentence_type": "SI", "id": "SI681",
"speaker_id": "MMDM0",
"dialect_region": "DR4",
"text": "Would such an act of refusal be useful?",
"word_detail": {
"start": [1960, 4000, 9400, 10680, 15880, 18297, 27080, 30120],
"utterance": ["would", "such", "an", "act", "of", "refusal", "be", "useful"],
"stop": [4000, 9400, 10680, 15880, 18297, 27080, 30120, 37720]
}}
```
cc @patrickvonplaten @vrindaprabhu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2835/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2835/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2834/comments | https://api.github.com/repos/huggingface/datasets/issues/2834/events | https://github.com/huggingface/datasets/pull/2834 | 978,309,749 | MDExOlB1bGxSZXF1ZXN0NzE4OTE5NjQ0 | 2,834 | Fix IndexError by ignoring empty RecordBatch | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,824,773,000 | 1,629,825,678,000 | 1,629,825,678,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2834",
"html_url": "https://github.com/huggingface/datasets/pull/2834",
"diff_url": "https://github.com/huggingface/datasets/pull/2834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2834.patch",
"merged_at": 1629825677000
} | We need to ignore the empty record batches for the interpolation search to work correctly when querying Arrow tables.
Close #2833
cc @SaulLu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2834/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2834/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2833/comments | https://api.github.com/repos/huggingface/datasets/issues/2833/events | https://github.com/huggingface/datasets/issues/2833 | 978,296,140 | MDU6SXNzdWU5NzgyOTYxNDA= | 2,833 | IndexError when accessing first element of a Dataset if first RecordBatch is empty | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,629,823,760,000 | 1,629,825,677,000 | 1,629,825,677,000 | MEMBER | null | null | null | The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty.
```python
from datasets import Dataset
import pyarrow as pa
pa_table = pa.Table.from_pydict({"a": [1]})
pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema)
ds_table = pa.concat_tables([pa_table2, pa_table])
dataset = Dataset(ds_table)
print([len(b) for b in dataset.data._batches])
# [0, 1]
print(dataset.data._offsets)
# [0 0 1] (should be [0, 1])
dataset[0]
```
raises
```python
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/table.py in _interpolation_search(arr, x)
90 else:
91 i, j = i, k
---> 92 raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
93
94
IndexError: Invalid query '0' for size 1.
```
This can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets`, as sketched below.
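A minimal sketch of that idea (simplified; the real logic lives in `datasets.table`):
```python
import numpy as np
import pyarrow as pa

table = pa.concat_tables(
    [pa.table({"a": pa.array([], type=pa.int64())}), pa.table({"a": [1]})]
)

# Drop zero-length batches before building the cumulative offsets used
# by the interpolation search.
batches = [b for b in table.to_batches() if len(b) > 0]
offsets = np.cumsum([0] + [len(b) for b in batches])
print(offsets)  # [0 1]
```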
cc @SaulLu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2833/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2833/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2832/comments | https://api.github.com/repos/huggingface/datasets/issues/2832/events | https://github.com/huggingface/datasets/issues/2832 | 978,012,800 | MDU6SXNzdWU5NzgwMTI4MDA= | 2,832 | Logging levels not taken into account | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,629,805,841,000 | 1,629,805,841,000 | null | MEMBER | null | null | null | ## Describe the bug
The `logging` module isn't working as intended relative to the levels to set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG"
```
## Expected results
I expect all logs to be output since I'm setting the verbosity to the `debug` level.
## Actual results
Only the first two logs are output.
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.6
- PyArrow version: 5.0.0
## To go further
This logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`.
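A hedged workaround in the meantime is to attach a handler manually with the standard library `logging` module:
```python
import logging as pylogging

from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.addHandler(pylogging.StreamHandler())  # give the logger a real handler

logger.debug("DEBUG")  # now emitted
```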
`transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2832/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2831/comments | https://api.github.com/repos/huggingface/datasets/issues/2831/events | https://github.com/huggingface/datasets/issues/2831 | 977,864,600 | MDU6SXNzdWU5Nzc4NjQ2MDA= | 2,831 | ArrowInvalid when mapping dataset with missing values | {
"login": "uniquefine",
"id": 12694730,
"node_id": "MDQ6VXNlcjEyNjk0NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/12694730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uniquefine",
"html_url": "https://github.com/uniquefine",
"followers_url": "https://api.github.com/users/uniquefine/followers",
"following_url": "https://api.github.com/users/uniquefine/following{/other_user}",
"gists_url": "https://api.github.com/users/uniquefine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uniquefine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uniquefine/subscriptions",
"organizations_url": "https://api.github.com/users/uniquefine/orgs",
"repos_url": "https://api.github.com/users/uniquefine/repos",
"events_url": "https://api.github.com/users/uniquefine/events{/privacy}",
"received_events_url": "https://api.github.com/users/uniquefine/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! It fails because of the feature type inference.\r\n\r\nBecause the first 1000 examples all have null values in the \"match\" field, then it infers that the type for this field is `null` type before writing the data on disk. But as soon as it tries to map an example with a non-null \"match\" field, then it fails.\r\n\r\nTo fix that you can either:\r\n- increase the writer_batch_size to >2000 (default is 1000) so that some non-null values will be in the first batch written to disk\r\n```python\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], writer_batch_size=2000)\r\n```\r\n- OR force the feature type with:\r\n```python\r\nfrom datasets import Features, Value\r\n\r\nfeatures = Features({\r\n 'conflict': Value('int64'),\r\n 'date': Value('string'),\r\n 'headline': Value('string'),\r\n 'match': Value('float64'),\r\n 'label': Value('float64')\r\n})\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], features=features)\r\n```"
] | 1,629,795,042,000 | 1,630,419,334,000 | null | NONE | null | null | null | ## Describe the bug
I encountered an `ArrowInvalid` when mapping a dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the CSV has a missing value (if you move the last line to the top, it isn't thrown).
[data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv)
[data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("csv", data_files=['data_small.csv'])
datasets = datasets.map(lambda e: {'labels': e['match']},
remove_columns=['id'])
```
## Expected results
No error
## Actual results
```
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Invalid null value
```
## Environment info
- `datasets` version: 1.5.0
- Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2831/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2830/comments | https://api.github.com/repos/huggingface/datasets/issues/2830/events | https://github.com/huggingface/datasets/pull/2830 | 977,563,947 | MDExOlB1bGxSZXF1ZXN0NzE4MjkyMTM2 | 2,830 | Add imagefolder dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible? \r\n\r\nMy hacky community version [here](https://huggingface.co/datasets/nateraw/image-folder) does this, but it wouldn't pass the test suite here. Any thoughts?",
"Hi ! Dataset builders that require some `data_files` like `csv` or `json` are handled differently that actual dataset scripts.\r\n\r\nIn particular:\r\n- they are placed directly in the `src` folder of the lib so that you can use it without internet connection (more exactly in `src/datasets/packaged_modules/<builder_name>.py`). So feel free to move the dataset python file there. You also need to register it in `src/datasets/packaked_modules.__init__.py`\r\n- they are handled a bit differently in our test suite (see the `PackagedDatasetTest` class in `test_dataset_common.py`). To be able to test the builder with your dummy data, you just need to modify `get_packaged_dataset_dummy_data_files` in `test_dataset_common.py` to return the right `data_files` for your builder. The dummy data can stay in `datasets/image_folder/dummy`\r\n\r\nLet me know if you have questions or if I can help !",
"Hey @lhoestq , I actually already did both of those things. I'm trying to get the `image-classification` task to work now. \r\n\r\nFor example...When you run `ds = load_dataset('imagefolder', data_files='my_files')`, with a directory called `./my_files` that looks like this:\r\n\r\n```\r\nmy_files\r\n----| Cat\r\n--------| image1.jpg\r\n--------| ...\r\n----| Dog\r\n--------| image1.jpg\r\n--------| ...\r\n```\r\n\r\n...We should set the dataset's `labels` feature to `datasets.features.ClassLabel(names=['cat', 'dog'])` dynamically with class names we find by getting a list of directories in `my_files` (via `data_files`). Otherwise the `datasets.tasks.ImageClassification` task will break, as the `labels` feature is not a `ClassLabel`.\r\n\r\nI couldn't figure out how to access the `data_files` in the builder's `_info` function in a way that would pass in the test suite. ",
"Nice ! Then maybe you can use `self.config.data_files` in `_info()` ?\r\nWhat error are you getting in the test suite ?\r\n\r\nAlso note that `data_files` was first developed to accept paths to actual files, not directories. In particular, it fetches the metadata of all the data_files to get a unique hash for the caching mechanism. So we may need to do a few changes first.",
"I'm trying to make it work by getting the label names in the _info automatically.\r\nI'll let you know tomorrow how it goes :)\r\n\r\nAlso cc @mariosasko since we're going to use #3163 \r\n\r\nRight now I'm getting the label name per file by taking the first word (from regex `\\w+`) after the common prefix of all the files per split",
"Data files resolution takes too much time on my side for a dataset of a few 10,000s of examples. I'll speed it up with some multihreading tomorrow, and maybe by removing the unnecessary checksum verification"
] | 1,629,761,646,000 | 1,636,999,012,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2830",
"html_url": "https://github.com/huggingface/datasets/pull/2830",
"diff_url": "https://github.com/huggingface/datasets/pull/2830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2830.patch",
"merged_at": null
} | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
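(A hedged sketch of the intended call, assuming a `torchvision`-style layout with one subfolder per class; the notebook below shows the official example.)
```python
from datasets import load_dataset

# my_files/
#   cat/001.jpg ...
#   dog/001.jpg ...
ds = load_dataset("imagefolder", data_files="my_files")
```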
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2830/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2830/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2829/comments | https://api.github.com/repos/huggingface/datasets/issues/2829/events | https://github.com/huggingface/datasets/issues/2829 | 977,233,360 | MDU6SXNzdWU5NzcyMzMzNjA= | 2,829 | Optimize streaming from TAR archives | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,629,737,800,000 | 1,634,032,949,000 | null | MEMBER | null | null | null | Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives:
```
tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2
```
Instead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`.
The regular `DownloadManager` already has it.
Then we will have to update the json/txt/csv/etc. loaders to make them use `iter_archive` on TAR archives.
That's also what TensorFlow Datasets does in this case.
See this [dataset](https://github.com/tensorflow/datasets/blob/93895059c80a9e05805e8f32a2e310f66a23fc98/tensorflow_datasets/image_classification/flowers.py) for example.
Therefore instead of doing
```python
uncompressed = dl_manager.extract(tar_archive)
filename = "books_large_p1.txt"
with open(os.path.join(uncompressed, filename)) as f:
for line in f:
...
```
we'll do
```python
for filename, f in dl_manager.iter_archive(tar_archive):
for line in f:
...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2829/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2828/comments | https://api.github.com/repos/huggingface/datasets/issues/2828/events | https://github.com/huggingface/datasets/pull/2828 | 977,181,517 | MDExOlB1bGxSZXF1ZXN0NzE3OTYwODg3 | 2,828 | Add code-mixed Kannada Hope speech dataset | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,734,109,000 | 1,633,108,863,000 | 1,633,108,863,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2828",
"html_url": "https://github.com/huggingface/datasets/pull/2828",
"diff_url": "https://github.com/huggingface/datasets/pull/2828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2828.patch",
"merged_at": null
} | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India* | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2828/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2827/comments | https://api.github.com/repos/huggingface/datasets/issues/2827/events | https://github.com/huggingface/datasets/pull/2827 | 976,976,552 | MDExOlB1bGxSZXF1ZXN0NzE3Nzg3MjEw | 2,827 | add a text classification dataset | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,721,481,000 | 1,629,733,878,000 | 1,629,733,878,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2827",
"html_url": "https://github.com/huggingface/datasets/pull/2827",
"diff_url": "https://github.com/huggingface/datasets/pull/2827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2827.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2827/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2826/comments | https://api.github.com/repos/huggingface/datasets/issues/2826/events | https://github.com/huggingface/datasets/issues/2826 | 976,974,254 | MDU6SXNzdWU5NzY5NzQyNTQ= | 2,826 | Add a Text Classification dataset: KanHope | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.\r\n\r\nMoreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that the data don't have missing labels, and that your dataset script parses the labels correctly ?"
] | 1,629,721,318,000 | 1,633,111,619,000 | 1,633,111,619,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages*
- I tried following the steps as per the instructions; however, I could not resolve an error (see the hedged sketch after this list). Any help would be appreciated.
- The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval*
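Based on the maintainer's comment above, the `KeyError: ' '` suggests an empty or whitespace-only label reaching a `ClassLabel` feature. A hedged sketch of the kind of row cleaning that could avoid it inside the script's `_generate_examples` (the field names are hypothetical):
```python
# Hypothetical cleanup: skip rows whose label is empty or whitespace-only
# before they are encoded as ClassLabel values.
for idx, row in enumerate(rows):
    label = row["label"].strip()
    if not label:
        continue  # drop unlabeled rows (or map them to a placeholder class)
    yield idx, {"text": row["text"], "label": label}
```
The full traceback: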
```
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762...
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-114-4a9cdb519e4c> in <module>()
1 from datasets import load_dataset
2
----> 3 data = load_dataset('/content/bn')
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
850 ignore_verifications=ignore_verifications,
851 try_from_hf_gcs=try_from_hf_gcs,
--> 852 use_auth_token=use_auth_token,
853 )
854
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
614 if not downloaded_from_gcs:
615 self._download_and_prepare(
--> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
618 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
691 try:
692 # Prepare split will record examples associated to the split
--> 693 self._prepare_split(split_generator, **prepare_split_kwargs)
694 except OSError as e:
695 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
1107 disable=bool(logging.get_verbosity() == logging.NOTSET),
1108 ):
-> 1109 example = self.info.features.encode_example(record)
1110 writer.write(example, key)
1111 finally:
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example)
1015 """
1016 example = cast_to_python_objects(example)
-> 1017 return encode_nested_example(self, example)
1018
1019 def encode_batch(self, batch):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0)
863 if isinstance(schema, dict):
864 return {
--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
866 }
867 elif isinstance(schema, (list, tuple)):
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 892 return schema.encode_example(obj)
893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
894 return obj
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data)
665 # If a string is given, convert to associated integer
666 if isinstance(example_data, str):
--> 667 example_data = self.str2int(example_data)
668
669 # Allowing -1 to mean no label.
/usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values)
623 if value not in self._str2int:
624 value = str(value).strip()
--> 625 output.append(self._str2int[str(value)])
626 else:
627 # No names provided, try to integerize
KeyError: ' '
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2826/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2825/comments | https://api.github.com/repos/huggingface/datasets/issues/2825/events | https://github.com/huggingface/datasets/issues/2825 | 976,584,926 | MDU6SXNzdWU5NzY1ODQ5MjY= | 2,825 | The datasets.map function does not load cached dataset after moving python script | {
"login": "hobbitlzy",
"id": 35392624,
"node_id": "MDQ6VXNlcjM1MzkyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/35392624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hobbitlzy",
"html_url": "https://github.com/hobbitlzy",
"followers_url": "https://api.github.com/users/hobbitlzy/followers",
"following_url": "https://api.github.com/users/hobbitlzy/following{/other_user}",
"gists_url": "https://api.github.com/users/hobbitlzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hobbitlzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hobbitlzy/subscriptions",
"organizations_url": "https://api.github.com/users/hobbitlzy/orgs",
"repos_url": "https://api.github.com/users/hobbitlzy/repos",
"events_url": "https://api.github.com/users/hobbitlzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/hobbitlzy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This also happened to me on COLAB.\r\nDetails:\r\nI ran the `run_mlm.py` in two different notebooks. \r\nIn the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.\r\nIn the second notebook, I copy the cache folder from drive and re-run the run_mlm.py script (this time I uncomment the trainer code which happens after the tokenization)\r\n\r\nNote: I didn't change anything in the arguments, not even the preprocessing_num_workers\r\n ",
"Thanks for reporting ! This is indeed a bug, I'm looking into it",
"#2854 fixed the issue :)\r\n\r\nWe'll do a new release of `datasets` soon to make the fix available.\r\nIn the meantime, feel free to try it out by installing `datasets` from source\r\n\r\nIf you have other issues or any question, feel free to re-open the issue :)"
] | 1,629,689,017,000 | 1,630,415,681,000 | 1,630,415,616,000 | NONE | null | null | null | ## Describe the bug
The datasets.map function caches the processed data in a certain directory. When the map function is called again with exactly the same parameters, the cached data is supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data: I use the same data processing in different tasks, yet the datasets are processed again; the only difference is that I run the processing from different files.
## Steps to reproduce the bug
Just run the following code in different .py files.
```python
if __name__ == '__main__':
    from datasets import load_dataset
    from transformers import AutoTokenizer

    raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def tokenize_function(examples):
        return tokenizer(examples["text"], padding="max_length", truncation=True)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
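A hedged workaround sketch (it does not fix the underlying cache-lookup issue, and the paths below are illustrative): pinning the cache files explicitly via `cache_file_names` so that different scripts reuse the same cache:
```python
# Illustrative only: write/read each processed split from a fixed file so the
# cache is found no matter which script calls map.
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    cache_file_names={split: f"/tmp/tokenized_{split}.arrow" for split in raw_datasets},
)
```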
## Expected results
The map function should reload the cached data on the second and any later runs.
## Actual results
The processing happens on each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0
This is the first time I have reported a bug. If anything in the description is unclear or wrong, please let me know 😄.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2825/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2825/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2824/comments | https://api.github.com/repos/huggingface/datasets/issues/2824/events | https://github.com/huggingface/datasets/pull/2824 | 976,394,721 | MDExOlB1bGxSZXF1ZXN0NzE3MzIyMzY5 | 2,824 | Fix defaults in cache_dir docstring in load.py | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,643,717,000 | 1,629,984,212,000 | 1,629,978,916,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2824",
"html_url": "https://github.com/huggingface/datasets/pull/2824",
"diff_url": "https://github.com/huggingface/datasets/pull/2824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2824.patch",
"merged_at": 1629978916000
} | Fix defaults in the `cache_dir` docstring. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2824/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2823/comments | https://api.github.com/repos/huggingface/datasets/issues/2823/events | https://github.com/huggingface/datasets/issues/2823 | 976,135,355 | MDU6SXNzdWU5NzYxMzUzNTU= | 2,823 | HF_DATASETS_CACHE variable in Windows | {
"login": "rp2839",
"id": 8453798,
"node_id": "MDQ6VXNlcjg0NTM3OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8453798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rp2839",
"html_url": "https://github.com/rp2839",
"followers_url": "https://api.github.com/users/rp2839/followers",
"following_url": "https://api.github.com/users/rp2839/following{/other_user}",
"gists_url": "https://api.github.com/users/rp2839/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rp2839/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rp2839/subscriptions",
"organizations_url": "https://api.github.com/users/rp2839/orgs",
"repos_url": "https://api.github.com/users/rp2839/repos",
"events_url": "https://api.github.com/users/rp2839/events{/privacy}",
"received_events_url": "https://api.github.com/users/rp2839/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Agh - I'm a muppet. No quote marks are needed.\r\nset HF_DATASETS_CACHE = C:\\Datasets\r\nworks as intended."
] | 1,629,551,864,000 | 1,629,552,011,000 | 1,629,552,011,000 | NONE | null | null | null | I can't seem to use a custom cache directory on Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2823/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2822/comments | https://api.github.com/repos/huggingface/datasets/issues/2822/events | https://github.com/huggingface/datasets/pull/2822 | 975,744,463 | MDExOlB1bGxSZXF1ZXN0NzE2ODUxMTAy | 2,822 | Add url prefix convention for many compression formats | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the feedback :) I will also complete the documentation to explain this convention",
"I just added some documentation about how streaming works with chained URLs.\r\n\r\nI will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts.",
"Merging this one now, next step is resolve the conflicts in #2662 and update the docs for URL chaining :)\r\n\r\nThere is also the glob feature of zip files that I need to add, to be able to do this for example:\r\n```python\r\nload_dataset(\"json\", data_files=\"zip://*::https://foo.bar/archive.zip\")\r\n```"
] | 1,629,475,883,000 | 1,629,734,356,000 | 1,629,734,354,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2822",
"html_url": "https://github.com/huggingface/datasets/pull/2822",
"diff_url": "https://github.com/huggingface/datasets/pull/2822.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2822.patch",
"merged_at": 1629734354000
} | ## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (in `fsspec`, TAR extraction also handles gz, bz2, etc. compression)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
    with open(urlpath) as f:
        ....
```
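For reference, the same chaining can be exercised directly with `fsspec` (the URL below is illustrative):
```python
import fsspec

# fsspec resolves the chained URL right-to-left: fetch the archive over HTTPS,
# then open a single member of the zip without extracting everything to disk.
with fsspec.open("zip://file.txt::https://foo.bar/archive.zip", "rt") as f:
    first_line = f.readline()
```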
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786, in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return the compressed data (as the builtin open does by default) or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly whether it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as fsspec, and it removes all ambiguities.
cc @albertvillanova @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2822/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2821/comments | https://api.github.com/repos/huggingface/datasets/issues/2821/events | https://github.com/huggingface/datasets/issues/2821 | 975,556,032 | MDU6SXNzdWU5NzU1NTYwMzI= | 2,821 | Cannot load linnaeus dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting ! #2852 fixed this error\r\n\r\nWe'll do a new release of `datasets` soon :)"
] | 1,629,461,715,000 | 1,630,415,582,000 | 1,630,415,529,000 | NONE | null | null | null | ## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-4-7ef3a88f6276> in <module>()
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("linnaeus")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
603 raise FileNotFoundError("Couldn't find file at {}".format(url))
604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 605 raise ConnectionError("Couldn't reach {}".format(url))
606
607 # Try a second time
ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2821/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2820/comments | https://api.github.com/repos/huggingface/datasets/issues/2820/events | https://github.com/huggingface/datasets/issues/2820 | 975,210,712 | MDU6SXNzdWU5NzUyMTA3MTI= | 2,820 | Downloading “reddit” dataset keeps timing out. | {
"login": "smeyerhot",
"id": 43877130,
"node_id": "MDQ6VXNlcjQzODc3MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smeyerhot",
"html_url": "https://github.com/smeyerhot",
"followers_url": "https://api.github.com/users/smeyerhot/followers",
"following_url": "https://api.github.com/users/smeyerhot/following{/other_user}",
"gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions",
"organizations_url": "https://api.github.com/users/smeyerhot/orgs",
"repos_url": "https://api.github.com/users/smeyerhot/repos",
"events_url": "https://api.github.com/users/smeyerhot/events{/privacy}",
"received_events_url": "https://api.github.com/users/smeyerhot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...\r\nDownloading: 13%\r\n403M/3.14G [44:39<2:27:09, 310kB/s]\r\n---------------------------------------------------------------------------\r\ntimeout Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 437 try:\r\n--> 438 yield\r\n 439 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 518 cache_content = False\r\n--> 519 data = self._fp.read(amt) if not fp_closed else b\"\"\r\n 520 if (\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)\r\n 458 b = bytearray(amt)\r\n--> 459 n = self.readinto(b)\r\n 460 return memoryview(b)[:n].tobytes()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)\r\n 502 # (for example, reading in 1k chunks)\r\n--> 503 n = self.fp.readinto(b)\r\n 504 if not n and b:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)\r\n 703 try:\r\n--> 704 return self._sock.recv_into(b)\r\n 705 except timeout:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)\r\n 1240 self.__class__)\r\n-> 1241 return self.read(nbytes, buffer)\r\n 1242 else:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)\r\n 1098 if buffer is not None:\r\n-> 1099 return self._sslobj.read(len, buffer)\r\n 1100 else:\r\n\r\ntimeout: The read operation timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadTimeoutError Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 757 try:\r\n--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n 759 yield chunk\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)\r\n 575 while not is_fp_closed(self._fp):\r\n--> 576 data = self.read(amt=amt, decode_content=decode_content)\r\n 577 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 540 # Content-Length are caught.\r\n--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n 542 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)\r\n 134 try:\r\n--> 135 self.gen.throw(type, value, traceback)\r\n 136 except StopIteration as exc:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 442 # there is yet no clean way to get at it from this context.\r\n--> 443 raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n 444 \r\n\r\nReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in 
<module>\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"/Volumes/My Passport for Mac/og-chat-data\")\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 669 split_dict = SplitDict(dataset_name=self.name)\r\n 670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 672 \r\n 673 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)\r\n 73 def _split_generators(self, dl_manager):\r\n 74 \"\"\"Returns SplitGenerators.\"\"\"\r\n---> 75 dl_path = dl_manager.download_and_extract(_URL)\r\n 76 return [\r\n 77 datasets.SplitGenerator(\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 287 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 288 \"\"\"\r\n--> 289 return self.extract(self.download(url_or_urls))\r\n 290 \r\n 291 def get_recorded_sizes_checksums(self):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 195 \r\n 196 start_time = datetime.now()\r\n--> 197 downloaded_path_or_paths = map_nested(\r\n 198 download_func,\r\n 199 url_or_urls,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 194 # Singleton\r\n 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 196 return function(data_struct)\r\n 197 \r\n 198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)\r\n 218 # append the relative path to the base_path\r\n 219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 220 return cached_path(url_or_filename, download_config=download_config)\r\n 221 \r\n 222 def iter_archive(self, 
path):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 286 if is_remote_url(url_or_filename):\r\n 287 # URL, so get it from the cache (downloading if necessary)\r\n--> 288 output_path = get_from_cache(\r\n 289 url_or_filename,\r\n 290 cache_dir=cache_dir,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 643 ftp_get(url, temp_file)\r\n 644 else:\r\n--> 645 http_get(\r\n 646 url,\r\n 647 temp_file,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 451 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 452 )\r\n--> 453 for chunk in response.iter_content(chunk_size=1024):\r\n 454 if chunk: # filter out keep-alive new chunks\r\n 455 progress.update(len(chunk))\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 763 raise ContentDecodingError(e)\r\n 764 except ReadTimeoutError as e:\r\n--> 765 raise ConnectionError(e)\r\n 766 else:\r\n 767 # Standard file-like object.\r\n\r\nConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n```",
"Hey @lhoestq should I try to fix this issue ?",
"It also doesn't seem to be \"smart caching\" and I received an error about a file not being found...",
"To be clear, the error I get when I try to \"re-instantiate\" the download after failure is: \r\n```\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'\r\n```",
"Here is a new error:\r\n```\r\nConnectionError: Couldn't reach https://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\n```",
"Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.\r\nThis should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\n\r\nWhen re-running your code you said you get a `OSError`, could you try deleting the file at the path returned by the error ? (the one after `[Errno 20] Not a directory:`). Ideally when a download fails you should be able to re-run it without error; there might be an issue here.\r\n\r\nFinally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again\r\n",
"@lhoestq thanks for the update. The directory specified by the OSError ie. \r\n```\r\n1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json \r\n```\r\n was not actually in that directory so I can't delete it. ",
"Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?\r\nThis way the download manager will know that it has to uncompress the data again",
"It seems to have worked. It only took like 20min! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41gb instead of 20gb but at least it finished. ",
"Great ! The timeout change will be available in the next release of `datasets` :)"
] | 1,629,427,956,000 | 1,631,112,722,000 | 1,631,112,722,000 | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
Everytime I try and download the reddit dataset it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```
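A hedged mitigation sketch (it does not extend the read timeout itself, which appears to be hardcoded; the parameter values are illustrative): enabling resumable downloads and retries through `DownloadConfig`:
```python
from datasets import load_dataset
from datasets.utils import DownloadConfig

# Illustrative: resume a partial download and retry on transient failures.
dl_config = DownloadConfig(resume_download=True, max_retries=5)
dataset = load_dataset(
    "reddit",
    ignore_verifications=True,
    download_config=dl_config,
    cache_dir="/Volumes/My Passport for Mac/og-chat-data",
)
```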
## Expected results
I would expect the download to finish, or at least provide a parameter to extend the read timeout window.
## Actual results
Shown below in the error message.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2820/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2819/comments | https://api.github.com/repos/huggingface/datasets/issues/2819/events | https://github.com/huggingface/datasets/pull/2819 | 974,683,155 | MDExOlB1bGxSZXF1ZXN0NzE1OTUyMjE1 | 2,819 | Added XL-Sum dataset | {
"login": "abhik1505040",
"id": 49608995,
"node_id": "MDQ6VXNlcjQ5NjA4OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/49608995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhik1505040",
"html_url": "https://github.com/abhik1505040",
"followers_url": "https://api.github.com/users/abhik1505040/followers",
"following_url": "https://api.github.com/users/abhik1505040/following{/other_user}",
"gists_url": "https://api.github.com/users/abhik1505040/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhik1505040/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhik1505040/subscriptions",
"organizations_url": "https://api.github.com/users/abhik1505040/orgs",
"repos_url": "https://api.github.com/users/abhik1505040/repos",
"events_url": "https://api.github.com/users/abhik1505040/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhik1505040/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000",
"The CI failure is unrelated to this PR - let me take a look",
"> Thanks for adding this one! I just did some minor changes and set the timeout back to 100sec instead of 1000\r\n\r\nThank you for updating the language tags. I tried timeout values up to 300 sec on my local machine, but some of the larger files still get timed out. Although this could have been a network issue on my end, have you verified that 100 sec works for all files?",
"Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\nTherefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n\r\nSo ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\nHF can probably help with hosting the data if needed",
"> Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\n> Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n> \r\n> So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\n> HF can probably help with hosting the data if needed\r\n\r\nIt'd be great if the dataset can be hosted in HF. How should I proceed here though? Upload the dataset files as a community dataset and update the links in this pull request or is there a more straightforward way?",
"Hi ! Ideally everything should be in the same place, so feel free to create a community dataset on the Hub and upload your data files as well as you dataset script (and also the readme.md and dataset_infos.json).\r\n\r\nThe only change you have to do in your dataset script is use a relative path to your data files instead of urls.\r\nFor example if your repository looks like this:\r\n```\r\nxlsum/\r\n├── data/\r\n│ ├── amharic_XLSum_v2.0.tar.bz2\r\n│ ├── ...\r\n│ └── yoruba_XLSum_v2.0.tar.bz2\r\n├── xlsum.py\r\n├── README.md\r\n└── dataset_infos.json\r\n```\r\nThen you just need to pass `\"data/amharic_XLSum_v2.0.tar.bz2\"` to `dl_manager.download_and_extract(...)`, instead of an url.\r\n\r\nLocally you can test that it's working as expected with\r\n```python\r\nload_dataset(\"path/to/my/directory/named/xlsum\")\r\n```\r\n\r\nThen once it's on the Hub, you can load it with\r\n```python\r\nload_dataset(\"username/xlsum\")\r\n```\r\n\r\nLet me know if you have questions :)",
"Thank you for your detailed response regarding the community dataset building process. However, will this pull request be merged into the main branch?",
"If XL-sum is available via the Hub we don't need to add it again in the `datasets` github repo ;)",
"The dataset has now been uploaded on HF hub. It's available at https://huggingface.co/datasets/csebuetnlp/xlsum. Closing this pull request. Thank you for your contributions. ",
"Thank you !"
] | 1,629,380,865,000 | 1,632,903,224,000 | 1,632,419,345,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2819",
"html_url": "https://github.com/huggingface/datasets/pull/2819",
"diff_url": "https://github.com/huggingface/datasets/pull/2819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2819.patch",
"merged_at": null
} | Added XL-Sum dataset, published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2819/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2818/comments | https://api.github.com/repos/huggingface/datasets/issues/2818/events | https://github.com/huggingface/datasets/issues/2818 | 974,552,009 | MDU6SXNzdWU5NzQ1NTIwMDk= | 2,818 | cannot load data from my local path | {
"login": "yang-collect",
"id": 46920280,
"node_id": "MDQ6VXNlcjQ2OTIwMjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/46920280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yang-collect",
"html_url": "https://github.com/yang-collect",
"followers_url": "https://api.github.com/users/yang-collect/followers",
"following_url": "https://api.github.com/users/yang-collect/following{/other_user}",
"gists_url": "https://api.github.com/users/yang-collect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yang-collect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yang-collect/subscriptions",
"organizations_url": "https://api.github.com/users/yang-collect/orgs",
"repos_url": "https://api.github.com/users/yang-collect/repos",
"events_url": "https://api.github.com/users/yang-collect/events{/privacy}",
"received_events_url": "https://api.github.com/users/yang-collect/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! The `data_files` parameter must be a string, a list/tuple or a python dict.\r\n\r\nCan you check the type of your `config.train_path` please ? Or use `data_files=str(config.train_path)` ?"
] | 1,629,371,610,000 | 1,630,399,576,000 | null | NONE | null | null | null | ## Describe the bug
I just want to load data directly from my local path, but I found a bug. To verify that my local path is valid, I compare the result with pandas.
Here is my code:
```python3
import pandas as pd
from datasets import load_dataset

# print my local path (`config` is my own configuration object)
print(config.train_path)
# read the data with pandas and print its length
train = pd.read_csv(config.train_path)
print(len(train))
# load the same file with load_dataset
data = load_dataset('csv', data_files=config.train_path)
print(len(data))
```
## Steps to reproduce the bug
```python
C:\Users\wie\Documents\项目\文本分类\data\train.csv
7613
Traceback (most recent call last):
File "c:/Users/wie/Documents/项目/文本分类/lib/DataPrecess.py", line 17, in <module>
data = load_dataset('csv',data_files=config.train_path)
File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 830, in load_dataset
**config_kwargs,
File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
**config_kwargs,
File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 271, in __init__
**config_kwargs,
File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 386, in _create_builder_config
config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token
File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 156, in create_config_id
raise ValueError("Please provide a valid `data_files` in `DatasetBuilder`")
ValueError: Please provide a valid `data_files` in `DatasetBuilder`
```
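Per the comment above, `data_files` must be a string, a list/tuple, or a python dict; a hedged sketch of the suggested fix, assuming `config.train_path` is a path-like object such as `pathlib.Path`:
```python
# Illustrative: cast the path-like object to str before passing it on.
data = load_dataset('csv', data_files=str(config.train_path))
print(len(data))
```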
## Expected results
## Actual results
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: win10
- Python version: 3.7.9
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2818/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2817/comments | https://api.github.com/repos/huggingface/datasets/issues/2817/events | https://github.com/huggingface/datasets/pull/2817 | 974,486,051 | MDExOlB1bGxSZXF1ZXN0NzE1NzgzMDQ3 | 2,817 | Rename The Pile subsets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?",
"I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subsets they want:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"the_pile\", subsets=[\"openwebtext2\", \"books3\", \"hn\"])\r\n```\r\n\r\nWe're alrady doing something similar for mC4, where users can specify the list of languages they want to load."
] | 1,629,366,982,000 | 1,629,735,850,000 | 1,629,735,849,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2817",
"html_url": "https://github.com/huggingface/datasets/pull/2817",
"diff_url": "https://github.com/huggingface/datasets/pull/2817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2817.patch",
"merged_at": 1629735849000
} | After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names.
I'm doing the changes for the subsets that @richarddwang added:
- [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801
- [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803
- [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802
For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think.
(we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2817/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2816/comments | https://api.github.com/repos/huggingface/datasets/issues/2816/events | https://github.com/huggingface/datasets/issues/2816 | 974,031,404 | MDU6SXNzdWU5NzQwMzE0MDQ= | 2,816 | Add Mostly Basic Python Problems Dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"I started working on that."
] | 1,629,318,519,000 | 1,631,261,060,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** Mostly Basic Python Problems Dataset
- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, a code solution, and 3 automated test cases.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/google-research/google-research/tree/master/mbpp
- **Motivation:** Simple, small dataset related to coding problems.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2816/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2816/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2815/comments | https://api.github.com/repos/huggingface/datasets/issues/2815/events | https://github.com/huggingface/datasets/pull/2815 | 973,862,024 | MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5 | 2,815 | Tiny typo fixes of "fo" -> "of" | {
"login": "aronszanto",
"id": 9934829,
"node_id": "MDQ6VXNlcjk5MzQ4Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9934829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aronszanto",
"html_url": "https://github.com/aronszanto",
"followers_url": "https://api.github.com/users/aronszanto/followers",
"following_url": "https://api.github.com/users/aronszanto/following{/other_user}",
"gists_url": "https://api.github.com/users/aronszanto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aronszanto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronszanto/subscriptions",
"organizations_url": "https://api.github.com/users/aronszanto/orgs",
"repos_url": "https://api.github.com/users/aronszanto/repos",
"events_url": "https://api.github.com/users/aronszanto/events{/privacy}",
"received_events_url": "https://api.github.com/users/aronszanto/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,304,571,000 | 1,629,360,182,000 | 1,629,360,182,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2815",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"merged_at": 1629360182000
} | Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2815/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2814/comments | https://api.github.com/repos/huggingface/datasets/issues/2814/events | https://github.com/huggingface/datasets/pull/2814 | 973,632,645 | MDExOlB1bGxSZXF1ZXN0NzE1MDUwODc4 | 2,814 | Bump tqdm version | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,291,089,000 | 1,629,294,251,000 | 1,629,293,990,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2814",
"html_url": "https://github.com/huggingface/datasets/pull/2814",
"diff_url": "https://github.com/huggingface/datasets/pull/2814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2814.patch",
"merged_at": 1629293989000
} | The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2814/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2813/comments | https://api.github.com/repos/huggingface/datasets/issues/2813/events | https://github.com/huggingface/datasets/issues/2813 | 973,470,580 | MDU6SXNzdWU5NzM0NzA1ODA= | 2,813 | Remove compression from xopen | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http://domain.org/filename.bz2`\r\n- `xopen` parses the `urlpath` and extracts the `compression` parameter and passes it to `fsspec.open`:\r\n `fsspec.open(\"http://domain.org/filename.bz2\", compression=\"bz2\")`\r\n\r\nPros:\r\n- clean solution that continues giving support to all compression formats\r\n- no breaking change when opening non-decompressed files: if no compression-protocol-like is passed, fsspec.open does not uncompress (passes compression=None)\r\n\r\nCons:\r\n- we create a \"private\" convention for the format of `urlpath`: although similar to `fsspec` protocols, we add custom prefixes for the `compression` argument"
] | 1,629,279,359,000 | 1,629,734,354,000 | 1,629,734,354,000 | MEMBER | null | null | null | We implemented support for streaming with 2 requirements:
- transparent use for the end user: just needs to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming
In order to fulfill these requirements, the streaming implementation patched some Python functions:
- the `open(urlpath)` function was patched with `fsspec.open(urlpath)`
- the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open`
Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,...
Under the hood, the implementation:
- passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)`
Some concerns have been raised about passing the parameter `compression` to `fsspec.open`:
- https://github.com/huggingface/datasets/pull/2786#discussion_r689550254
- #2811
The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in the `oscar` dataset:
```python
gzip.open(open(urlpath
```
While this is true:
- it is not natural/usual to call `open` inside `gzip.open` (I had never seen this before)
- indeed, it was coded that way in `datasets` only recently (2 months ago) in order to allow streaming support (with the previous implementation of streaming)
In this particular case, there is a natural fix (#2811; a sketch follows the list below):
- Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath`
- Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"`
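For illustration, a minimal sketch of what that patch could look like (the wrapper name `xgzip_open` is hypothetical; the real patching would live in the streaming utilities):

```python
import fsspec

def xgzip_open(urlpath, mode="rt", **kwargs):
    # fsspec opens local paths and remote URLs alike, and the `compression`
    # argument makes it decompress the gzip stream on the fly.
    return fsspec.open(urlpath, mode=mode, compression="gzip", **kwargs).open()
```

In streaming mode, `gzip.open(urlpath)` in a loading script would then resolve to a wrapper like this instead of the standard-library function.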
Are there other issues apart from this?
Note that the issue arises only because of the `open` inside the `gzip.open`. There is no issue in the other cases, where datasets loading scripts use just
- `gzip.open`
- `open` (after having called dl_manager.download_and_extract)
TODO:
- [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic.
- For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`:
- oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July)
  - In all 3 datasets, the only reason to put an `open` inside a `gzip.open` was indeed to support streaming
- [ ] If this is indeed an issue, which are the possible alternatives? Pros/cons? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2813/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2812/comments | https://api.github.com/repos/huggingface/datasets/issues/2812/events | https://github.com/huggingface/datasets/issues/2812 | 972,936,889 | MDU6SXNzdWU5NzI5MzY4ODk= | 2,812 | arXiv Dataset verification problem | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,629,223,308,000 | 1,629,223,308,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
`dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples; however, the data (downloaded from an external source) is updated every week with additional examples. Loading the dataset without `ignore_verifications=True` therefore results in a verification error.
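A possible workaround until the recorded sizes are refreshed (a sketch; `data_dir` is assumed to point at the manually downloaded arXiv metadata dump this script requires):

```python
from datasets import load_dataset

dataset = load_dataset(
    "arxiv_dataset",
    data_dir="path/to/arxiv-metadata",  # illustrative path
    ignore_verifications=True,  # skip the stale size/checksum checks
)
```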
 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2812/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2812/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2811/comments | https://api.github.com/repos/huggingface/datasets/issues/2811/events | https://github.com/huggingface/datasets/pull/2811 | 972,522,480 | MDExOlB1bGxSZXF1ZXN0NzE0MTAzNDIy | 2,811 | Fix stream oscar | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"One additional note: if we can try to not change the code of oscar.py too often, I'm sure users that have it in their cache directory will be happy to not have to redownload it every time they update the library ;)\r\n\r\n(since changing the code changes the cache directory of the dataset)",
"I don't think this is confusing for users because users don't even know we have patched `open`. The only thing users care is that if the pass `streaming=True`, they want to be able to load the dataset in streaming mode.\r\n\r\nI don't see any other dataset where patching `open` with `fsspec.open`+`compression` is an \"underlying issue\". Are there other datasets where this is an issue?\r\n\r\nThe only dataset where this was an issue is in oscar and the issue is indeed due to the additional `open` you added inside `zip.open`.",
"Closing this one since https://github.com/huggingface/datasets/pull/2822 reverted the change of behavior of `open`"
] | 1,629,195,059,000 | 1,629,973,575,000 | 1,629,973,574,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2811",
"html_url": "https://github.com/huggingface/datasets/pull/2811",
"diff_url": "https://github.com/huggingface/datasets/pull/2811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2811.patch",
"merged_at": null
} | Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4.
It was argued that this might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921
This PR:
- removes that additional `open`
- patches `gzip.open` with `xopen` + `compression="gzip"` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2811/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2810/comments | https://api.github.com/repos/huggingface/datasets/issues/2810/events | https://github.com/huggingface/datasets/pull/2810 | 972,040,022 | MDExOlB1bGxSZXF1ZXN0NzEzNjkzMTI1 | 2,810 | Add WIT Dataset | {
"login": "hassiahk",
"id": 13920778,
"node_id": "MDQ6VXNlcjEzOTIwNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/13920778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hassiahk",
"html_url": "https://github.com/hassiahk",
"followers_url": "https://api.github.com/users/hassiahk/followers",
"following_url": "https://api.github.com/users/hassiahk/following{/other_user}",
"gists_url": "https://api.github.com/users/hassiahk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hassiahk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hassiahk/subscriptions",
"organizations_url": "https://api.github.com/users/hassiahk/orgs",
"repos_url": "https://api.github.com/users/hassiahk/repos",
"events_url": "https://api.github.com/users/hassiahk/events{/privacy}",
"received_events_url": "https://api.github.com/users/hassiahk/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,629,142,449,000 | 1,629,220,458,000 | null | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2810",
"html_url": "https://github.com/huggingface/datasets/pull/2810",
"diff_url": "https://github.com/huggingface/datasets/pull/2810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2810.patch",
"merged_at": null
} | Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2810/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2809/comments | https://api.github.com/repos/huggingface/datasets/issues/2809/events | https://github.com/huggingface/datasets/pull/2809 | 971,902,613 | MDExOlB1bGxSZXF1ZXN0NzEzNTc2Njcz | 2,809 | Add Beans Dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,130,953,000 | 1,629,978,147,000 | 1,629,978,147,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2809",
"html_url": "https://github.com/huggingface/datasets/pull/2809",
"diff_url": "https://github.com/huggingface/datasets/pull/2809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2809.patch",
"merged_at": 1629978147000
} | Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2809/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2808/comments | https://api.github.com/repos/huggingface/datasets/issues/2808/events | https://github.com/huggingface/datasets/issues/2808 | 971,882,320 | MDU6SXNzdWU5NzE4ODIzMjA= | 2,808 | Enable streaming for Wikipedia corpora | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,629,129,552,000 | 1,629,129,552,000 | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be good candidates for streaming. Currently it is not possible to stream these corpora:
```python
from datasets import load_dataset
# Throws ValueError: Builder wikipedia is not streamable.
wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
```
Given that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. The goal of this issue is to discuss whether this feature even makes sense :)
**Describe the solution you'd like**
It would be nice to be able to stream Wikipedia corpora from the Hub with something like
```python
from datasets import load_dataset
wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2808/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2808/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2807/comments | https://api.github.com/repos/huggingface/datasets/issues/2807/events | https://github.com/huggingface/datasets/pull/2807 | 971,849,863 | MDExOlB1bGxSZXF1ZXN0NzEzNTMxNjIw | 2,807 | Add cats_vs_dogs dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,127,271,000 | 1,630,341,325,000 | 1,630,341,324,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2807",
"html_url": "https://github.com/huggingface/datasets/pull/2807",
"diff_url": "https://github.com/huggingface/datasets/pull/2807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2807.patch",
"merged_at": 1630341324000
} | Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2807/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2806/comments | https://api.github.com/repos/huggingface/datasets/issues/2806/events | https://github.com/huggingface/datasets/pull/2806 | 971,625,449 | MDExOlB1bGxSZXF1ZXN0NzEzMzM5NDUw | 2,806 | Fix streaming tar files from canonical datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n# Throws a 404 HTTP error\r\nnext(iter(books_dataset_streamed))\r\n```\r\n\r\nThe full stack trace is:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\n<ipython-input-11-5ebbbe110b13> in <module>()\r\n----> 1 next(iter(books_dataset_streamed))\r\n\r\n11 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in __iter__(self)\r\n 339 \r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in _iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n 339 \r\n 340 def __iter__(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in __iter__(self)\r\n 76 \r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80 \r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/bookcorpus/44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700/bookcorpus.py in _generate_examples(self, directory)\r\n 98 for txt_file in files:\r\n 99 with open(txt_file, mode=\"r\", encoding=\"utf-8\") as f:\r\n--> 100 for line in f:\r\n 101 yield _id, {\"text\": line.strip()}\r\n 102 _id += 1\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/implementations/http.py in read(self, length)\r\n 496 else:\r\n 497 length = min(self.size - self.loc, length)\r\n--> 498 return super().read(length)\r\n 499 \r\n 500 async def async_fetch_all(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/spec.py in read(self, length)\r\n 1481 # don't even bother calling fetch\r\n 1482 return b\"\"\r\n-> 1483 out = self.cache._fetch(self.loc, self.loc + length)\r\n 1484 self.loc += len(out)\r\n 1485 return out\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/caching.py in _fetch(self, start, end)\r\n 374 ):\r\n 375 # First read, or extending both before and after\r\n--> 376 self.cache = self.fetcher(start, bend)\r\n 377 self.start = start\r\n 378 elif start < self.start:\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in wrapper(*args, **kwargs)\r\n 86 def wrapper(*args, **kwargs):\r\n 87 self = obj or args[0]\r\n---> 88 return sync(self.loop, func, *args, **kwargs)\r\n 89 \r\n 90 return wrapper\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in sync(loop, func, timeout, *args, **kwargs)\r\n 67 raise FSTimeoutError\r\n 68 if isinstance(result[0], BaseException):\r\n---> 69 raise result[0]\r\n 70 return result[0]\r\n 71 \r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/implementations/http.py in async_fetch_range(self, start, end)\r\n 535 # range request outside file\r\n 536 return b\"\"\r\n--> 537 r.raise_for_status()\r\n 538 if r.status == 206:\r\n 539 # partial 
content, as expected\r\n\r\n/usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self)\r\n 1003 status=self.status,\r\n 1004 message=self.reason,\r\n-> 1005 headers=self.headers,\r\n 1006 )\r\n 1007 \r\n\r\nClientResponseError: 404, message='Not Found', url=URL('https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2/books_large_p1.txt')\r\n```\r\n\r\nLet me know if this is unrelated and I'll open a separate issue :)\r\n\r\nEnvironment info:\r\n\r\n```\r\n- `datasets` version: 1.11.1.dev0\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n```",
"@lewtun: `.tar.compression-extension` files are not supported yet. That is the objective of this PR.",
"> @lewtun: `.tar.compression-extension` files are not supported yet. That is the objective of this PR.\r\n\r\nthanks for the context and the great work on the streaming features (right now i'm writing the streaming section of the HF course, so am acting like a beta tester 😄)",
"@lewtun this PR fixes previous issue with xjoin:\r\n\r\nGiven:\r\n```python\r\nxjoin(\r\n \"https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2\",\r\n \"books_large_p1.txt\"\r\n)\r\n```\r\n\r\n- Before it gave: \r\n `\"https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2/books_large_p1.txt\"`\r\n thus raising the 404 error\r\n\r\n- Now it gives:\r\n `tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2`\r\n (this is the expected format for `fsspec`) and additionally passes the parameter `compression=\"bz2\"`.\r\n See: https://github.com/huggingface/datasets/pull/2806/files#diff-97bb2d08db65ce3b679aefc43cadad76d053c1e58ecc315e49b80873d0fbdabeR15",
"closing in favor of #3066 "
] | 1,629,112,228,000 | 1,634,115,843,000 | 1,634,115,842,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2806",
"html_url": "https://github.com/huggingface/datasets/pull/2806",
"diff_url": "https://github.com/huggingface/datasets/pull/2806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2806.patch",
"merged_at": null
} | The previous PR #2800 implemented support for streaming remote tar files when passing the parameter `data_files`: it required a glob string `"*"`.
However, this glob string raises an error when streaming canonical datasets (which perform a `join` after the `open`).
This PR fixes this issue and allows streaming tar files both from:
- canonical datasets scripts and
- data files.
This PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`,...
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2806/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2806/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2805/comments | https://api.github.com/repos/huggingface/datasets/issues/2805/events | https://github.com/huggingface/datasets/pull/2805 | 971,436,456 | MDExOlB1bGxSZXF1ZXN0NzEzMTc3MTI4 | 2,805 | Fix streaming zip files from canonical datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,097,900,000 | 1,629,110,040,000 | 1,629,110,040,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2805",
"html_url": "https://github.com/huggingface/datasets/pull/2805",
"diff_url": "https://github.com/huggingface/datasets/pull/2805.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2805.patch",
"merged_at": 1629110040000
} | The previous PR #2798 fixed streaming of remote zip files when passing the parameter `data_files`.
However, that broke streaming of zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`, sketched below) after `StreamingDownloadManager.download_and_extract()` is called.
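For illustration, the patched join is expected to rewrite archive paths into the chained-URL form that `fsspec` understands (the import path, URL, and member name are illustrative):

```python
from datasets.utils.streaming_download_manager import xjoin  # assumed module path

url = xjoin("https://example.org/data.zip", "train.csv")
# expected result: "zip://train.csv::https://example.org/data.zip"
```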
This PR fixes this issue and allows streaming zip files both from:
- canonical datasets scripts and
- data files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2805/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2804/comments | https://api.github.com/repos/huggingface/datasets/issues/2804/events | https://github.com/huggingface/datasets/pull/2804 | 971,353,437 | MDExOlB1bGxSZXF1ZXN0NzEzMTA2NTMw | 2,804 | Add Food-101 | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,629,087,975,000 | 1,629,469,893,000 | 1,629,377,286,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2804",
"html_url": "https://github.com/huggingface/datasets/pull/2804",
"diff_url": "https://github.com/huggingface/datasets/pull/2804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2804.patch",
"merged_at": 1629377286000
} | Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2804/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2803/comments | https://api.github.com/repos/huggingface/datasets/issues/2803/events | https://github.com/huggingface/datasets/pull/2803 | 970,858,928 | MDExOlB1bGxSZXF1ZXN0NzEyNzQxODMz | 2,803 | add stack exchange | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming",
"\r\n> If you don't mind I'll open a PR to do the renaming\r\n\r\n@lhoestq That will be nice !!\r\n"
] | 1,628,928,662,000 | 1,629,367,653,000 | 1,629,360,458,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2803",
"html_url": "https://github.com/huggingface/datasets/pull/2803",
"diff_url": "https://github.com/huggingface/datasets/pull/2803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2803.patch",
"merged_at": 1629360458000
} | Stack Exchange is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all subdatasets together, so we are not able to use just one of them from The Pile data. So I created an independent dataset using The Pile preliminary components.
I also changed the default `timeout` to 100 seconds instead of 10 seconds; otherwise I kept getting read timeouts when downloading the source data of the Stack Exchange and cc100 datasets.
While creating the dataset card, I found there is room for improvement in creating/editing dataset cards. I've made it an issue: #2797.
Also, I am wondering whether the import of The Pile dataset is actively being undertaken (because I may need it soon)? #1675 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2803/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2802/comments | https://api.github.com/repos/huggingface/datasets/issues/2802/events | https://github.com/huggingface/datasets/pull/2802 | 970,848,302 | MDExOlB1bGxSZXF1ZXN0NzEyNzM0MTc3 | 2,802 | add openwebtext2 | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It seems we need to `pip install jsonlines` to pass the checks ?",
"Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replace `jsonlines` with a simple for loop on the lines of the files and use `json.loads`, or you can add `TESTS_REQUIRE` to the test requirements (but in this case users will have to install it as well).",
"Thanks for your suggestion. I now know `io` and json lines format better and has changed `jsonlines` to just `readlines`."
] | 1,628,924,943,000 | 1,629,727,574,000 | 1,629,727,574,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2802",
"html_url": "https://github.com/huggingface/datasets/pull/2802",
"diff_url": "https://github.com/huggingface/datasets/pull/2802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2802.patch",
"merged_at": 1629727574000
} | openwebtext2 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all subdatasets together, so we are not able to use just one of them from The Pile data. So I created an independent dataset using The Pile preliminary components.
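As suggested in the review comments above, a dependency-free sketch of parsing the JSON Lines source (the file path is illustrative):

```python
import json

with open("openwebtext2.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one JSON document per line
```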
While creating the dataset card, I found there is room for improvement in creating/editing dataset cards. I've made it an issue: #2797.
Also, I am wondering whether the import of The Pile dataset is actively being undertaken (because I may need it soon)? #1675 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2802/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2801/comments | https://api.github.com/repos/huggingface/datasets/issues/2801/events | https://github.com/huggingface/datasets/pull/2801 | 970,844,617 | MDExOlB1bGxSZXF1ZXN0NzEyNzMwODEz | 2,801 | add books3 | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> When I was creating dataset card. I found there is room for creating / editing dataset card. I've made it an issue. #2797\r\n\r\nThanks for the message, we'll definitely improve this\r\n\r\n> Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n\r\nWell currently no, but I think @lewtun was about to do it (though he's currently on vacations)",
"> > Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n> \r\n> Well currently no, but I think @lewtun was about to do it (though he's currently on vacations)\r\n\r\nyes i plan to start working on this next week #2185 \r\n\r\none question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles? \r\n\r\nin appendix c of their paper, they provide details on how they extracted these datasets, but it would be nice if we could just point to a url so we can be as close as possible to original implementation.",
"@lewtun \r\n\r\n> yes i plan to start working on this next week\r\n\r\nNice! Looking forward to it.\r\n\r\n> one question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles?\r\n\r\nSadly, I don't know any existing dataset of enron emails, but I believe opensubtitles dataset is hosted at here. https://the-eye.eu/public/AI/pile_preliminary_components/\r\n![image](https://user-images.githubusercontent.com/17963619/130061667-8c17985a-1c2f-432f-89f0-66a5288611b8.png)\r\n",
"thanks for the link @richarddwang! i think that corpus is actually the youtube subtitles one and my impression is that eleutherai have only uploaded the 14 new datasets they created. i've contacted one of the authors so hopefully they can share some additional info for us :)\r\n\r\nbtw it might take a while to put together all the corpora if i also need to preprocess them (e.g. the open subtitles / enron email etc), but i expect no longer than a few weeks."
] | 1,628,924,665,000 | 1,629,391,389,000 | 1,629,301,019,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2801",
"html_url": "https://github.com/huggingface/datasets/pull/2801",
"diff_url": "https://github.com/huggingface/datasets/pull/2801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2801.patch",
"merged_at": 1629301019000
} | books3 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all subdatasets together, so we are not able to use just one of them from The Pile data. So I created an independent dataset using The Pile preliminary components.
While creating the dataset card, I found there is room for improvement in creating/editing dataset cards. I've made it an issue: #2797.
Also, I am wondering whether the import of The Pile dataset is actively being undertaken (because I may need it soon)? #1675 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2801/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2800/comments | https://api.github.com/repos/huggingface/datasets/issues/2800/events | https://github.com/huggingface/datasets/pull/2800 | 970,819,988 | MDExOlB1bGxSZXF1ZXN0NzEyNzExNTcx | 2,800 | Support streaming tar files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Why do we need the custom `readline` for exactly ? feel free to add a comment to say why it's needed"
] | 1,628,916,017,000 | 1,629,972,150,000 | 1,628,916,957,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2800",
"html_url": "https://github.com/huggingface/datasets/pull/2800",
"diff_url": "https://github.com/huggingface/datasets/pull/2800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2800.patch",
"merged_at": 1628916957000
} | This PR adds support for streaming tar files by using the `fsspec` tar protocol.
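For illustration, a minimal sketch of the chained-URL form this protocol uses (URL and member name are illustrative):

```python
import fsspec

# Open `member.txt` inside a remote tar archive, streamed on the fly.
with fsspec.open("tar://member.txt::https://example.org/archive.tar", "rt") as f:
    first_line = f.readline()
```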
It also uses the custom `readline` implemented in PR #2786.
The corresponding test is implemented in PR #2786. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2800/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2799/comments | https://api.github.com/repos/huggingface/datasets/issues/2799/events | https://github.com/huggingface/datasets/issues/2799 | 970,507,351 | MDU6SXNzdWU5NzA1MDczNTE= | 2,799 | Loading JSON throws ArrowNotImplementedError | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nApparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.\r\n\r\nI will investigate if there is a way to tell pyarrow not to try that timestamp casting.",
"I think the issue is more complex than that...\r\n\r\nI just took one of your JSON lines and pyarrow.json read it without problem.",
"> I just took one of your JSON lines an pyarrow.json read it without problem.\r\n\r\nyes, and for some peculiar reason the error is non-deterministic (i was eventually able to load the whole dataset by just re-running the `load_dataset` cell multiple times 🤔)\r\n\r\nthanks for looking into this 🙏 !",
"I think the error is generated by the `pyarrow.json.read()` option: `read_options=paj.ReadOptions(block_size=block_size)`...\r\ncc: @lhoestq ",
"The code works fine on my side.\r\nNot sure what's going on here :/\r\n\r\nI remember we did a few changes in the JSON loader in #2638 , did you do an update `datasets` when debugging this ?\r\n",
"OK after upgrading `datasets` to v1.12.1 the issue seems to have gone away. Closing this now :)",
"Oops, I spoke too soon 😓 \r\n\r\nAfter deleting the cache and trying the above code snippet again I am hitting the same error. You can also reproduce it in the Colab notebook I linked to in the issue description. ",
"@albertvillanova @lhoestq I noticed the same issue using datasets v1.12.1. Is there an update on when this could be fixed?",
"Apparently it's possible to make it work by increasing the `block_size`, let me open a PR",
"I just opened a PR with a fix, feel free to install `datasets` from source from source and let me know if it helps",
"@zijwang did PR #3000 solve the problem for you? It did for me, so it all is good on your end we can close this issue. Thanks again to @lhoestq for the pyarrow magic 🤯 "
] | 1,628,868,708,000 | 1,641,841,172,000 | 1,641,841,172,000 | MEMBER | null | null | null | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas`, which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps.
You can find a Colab notebook which reproduces the error [here](https://colab.research.google.com/drive/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing).
**Edit:** If one repeatedly tries to load the dataset, it _eventually_ works, but I think it would still be good to understand why it fails in the first place :)
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl
data_files = hf_hub_url(repo_id="lewtun/github-issues-test", filename="issues-datasets.jsonl", repo_type="dataset")
# throws ArrowNotImplementedError
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas ...
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
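As a stopgap, given the non-deterministic failure noted in the edit above, wrapping the call in a retry loop eventually succeeds; a rough sketch continuing from the snippet above:
```python
# Retry sketch (assumption: the failure is flaky, per the edit above).
for attempt in range(5):
    try:
        dset = load_dataset("json", data_files=data_files, split="test")
        break
    except Exception as err:
        print(f"attempt {attempt} failed: {err}")
```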
## Expected results
I can load any line-separated JSON file, similar to `pandas`.
## Actual results
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-7-5b8e82b6c3a2> in <module>()
----> 1 dset = load_dataset("json", data_files=data_files, split="test")
9 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: JSON conversion to struct<url: timestamp[s], html_url: timestamp[s], labels_url: timestamp[s], id: int64, node_id: timestamp[s], number: int64, title: timestamp[s], description: timestamp[s], creator: struct<login: timestamp[s], id: int64, node_id: timestamp[s], avatar_url: timestamp[s], gravatar_id: timestamp[s], url: timestamp[s], html_url: timestamp[s], followers_url: timestamp[s], following_url: timestamp[s], gists_url: timestamp[s], starred_url: timestamp[s], subscriptions_url: timestamp[s], organizations_url: timestamp[s], repos_url: timestamp[s], events_url: timestamp[s], received_events_url: timestamp[s], type: timestamp[s], site_admin: bool>, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2799/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2798/comments | https://api.github.com/repos/huggingface/datasets/issues/2798/events | https://github.com/huggingface/datasets/pull/2798 | 970,493,126 | MDExOlB1bGxSZXF1ZXN0NzEyNDM3ODc2 | 2,798 | Fix streaming zip files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. I don't see why you're using a glob pattern ?",
"This change is to allow this:\r\n```python\r\ndata_files = f\"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\nassert isinstance(ds, IterableDataset)\r\n```\r\nNote that in this case the user will not call os.path.join.\r\n\r\nBefore this PR it gave error because pointing to the root, without any subsequent join, gives error:\r\n```python\r\nfsspec.open(\"zip://::https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\")\r\n```"
] | 1,628,867,821,000 | 1,629,123,410,000 | 1,628,869,108,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2798",
"html_url": "https://github.com/huggingface/datasets/pull/2798",
"diff_url": "https://github.com/huggingface/datasets/pull/2798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2798.patch",
"merged_at": 1628869108000
} | Currently, streaming remote zip data files gives a `FileNotFoundError`:
```python
data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
next(iter(ds))
```
This PR fixes it by adding a glob string.
The corresponding test is implemented in PR #2786. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2798/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2797/comments | https://api.github.com/repos/huggingface/datasets/issues/2797/events | https://github.com/huggingface/datasets/issues/2797 | 970,331,634 | MDU6SXNzdWU5NzAzMzE2MzQ= | 2,797 | Make creating/editing dataset cards easier, by editing on site and dumping info from test command. | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,628,855,689,000 | 1,628,930,529,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Creating and editing dataset cards should be easy, but currently it is not:
- If someone else knows some information I don't (bias of the dataset, dataset curation, supported tasks, ...), they have to know that the description on hf.co comes from the README.md under github.com/huggingface/datasets/datasets/<the dataset>, and be willing to make a PR to add or fix the information.
- Much of the information is already saved in `dataset_info.json` (citation, description), but it still needs to be written down in README.md again.
- A contributor needs to pip install and start a local server just for tagging the dataset's size. And the contributor may be creating the dataset on a lab server, which can't open a browser.
- If anyone proposes a new tag, it doesn't show up in the list that other creators see (a Stack Overflow-style approach may be ideal).
- The dataset card generator web app doesn't generate the necessary subsection `Contributions` for us.
**Describe the solution you'd like**
- Everyone (or at least the author/contributor) can edit the description, information and tags of the dataset on the hf.co website, just like Wikipedia + Stack Overflow.
- We can infer the actual data size, citation, data instances, ... from `dataset_info.json` and `dataset.arrow` via `datasets-cli test` (example below).
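For reference, a hypothetical invocation of that command (the flags follow the repo's contribution guide and are assumptions here):
```
datasets-cli test datasets/<dataset_name> --save_infos --all_configs
```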
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2797/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2797/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2796/comments | https://api.github.com/repos/huggingface/datasets/issues/2796/events | https://github.com/huggingface/datasets/pull/2796 | 970,235,846 | MDExOlB1bGxSZXF1ZXN0NzEyMjE1ODM2 | 2,796 | add cedr dataset | {
"login": "naumov-al",
"id": 22640075,
"node_id": "MDQ6VXNlcjIyNjQwMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/22640075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naumov-al",
"html_url": "https://github.com/naumov-al",
"followers_url": "https://api.github.com/users/naumov-al/followers",
"following_url": "https://api.github.com/users/naumov-al/following{/other_user}",
"gists_url": "https://api.github.com/users/naumov-al/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naumov-al/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naumov-al/subscriptions",
"organizations_url": "https://api.github.com/users/naumov-al/orgs",
"repos_url": "https://api.github.com/users/naumov-al/repos",
"events_url": "https://api.github.com/users/naumov-al/events{/privacy}",
"received_events_url": "https://api.github.com/users/naumov-al/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Hi ! Thanks a lot for adding this one :)\r\n> \r\n> Good job with the dataset card and the dataset script !\r\n> \r\n> I left a few suggestions\r\n\r\nThank you very much for your helpful suggestions. I have tried to carry them all out."
] | 1,628,847,455,000 | 1,630,080,096,000 | 1,630,080,096,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2796",
"html_url": "https://github.com/huggingface/datasets/pull/2796",
"diff_url": "https://github.com/huggingface/datasets/pull/2796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2796.patch",
"merged_at": 1630080095000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2796/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2794/comments | https://api.github.com/repos/huggingface/datasets/issues/2794/events | https://github.com/huggingface/datasets/issues/2794 | 969,728,545 | MDU6SXNzdWU5Njk3Mjg1NDU= | 2,794 | Warnings and documentation about pickling incorrect | {
"login": "mbforbes",
"id": 1170062,
"node_id": "MDQ6VXNlcjExNzAwNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbforbes",
"html_url": "https://github.com/mbforbes",
"followers_url": "https://api.github.com/users/mbforbes/followers",
"following_url": "https://api.github.com/users/mbforbes/following{/other_user}",
"gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions",
"organizations_url": "https://api.github.com/users/mbforbes/orgs",
"repos_url": "https://api.github.com/users/mbforbes/repos",
"events_url": "https://api.github.com/users/mbforbes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbforbes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,628,809,753,000 | 1,628,809,771,000 | null | NONE | null | null | null | ## Describe the bug
I have a docs bug and a closely related docs enhancement suggestion!
### Bug
The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails.
Warning:
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262
Docs:
> For a transform to be hashable, it needs to be pickleable using dill or pickle.
> – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting)
For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default:
https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83
... and the hashing will fail if it fails.
### Enhancement
I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139:
```python
from datasets.fingerprint import Hasher
Hasher.hash(my_object)
```
I think adding this to the docs will help future users quickly debug any hashing troubles of their own :-)
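For instance, a small helper like this (hypothetical, not part of `datasets`) makes the dill-vs-pickle difference visible at a glance:
```python
# Minimal sketch: report which serializer can handle a given object.
import pickle

import dill


def check_serializers(obj):
    for name, dumps in [("pickle", pickle.dumps), ("dill", dill.dumps)]:
        try:
            dumps(obj)
            print(f"{name}: OK")
        except Exception as err:  # error types differ across serializers
            print(f"{name}: FAILED ({type(err).__name__}: {err})")
```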
## Steps to reproduce the bug
`dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643
## Expected results
If either `dill` or `pickle` can successfully hash, the hashing will succeed.
## Actual results
If `dill` or `pickle` cannot hash, the hashing fails.
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2794/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2793/comments | https://api.github.com/repos/huggingface/datasets/issues/2793/events | https://github.com/huggingface/datasets/pull/2793 | 968,967,773 | MDExOlB1bGxSZXF1ZXN0NzExMDQ4NDY2 | 2,793 | Fix type hint for data_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,779,357,000 | 1,628,782,529,000 | 1,628,782,529,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2793",
"html_url": "https://github.com/huggingface/datasets/pull/2793",
"diff_url": "https://github.com/huggingface/datasets/pull/2793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2793.patch",
"merged_at": 1628782529000
} | Fix type hint for `data_files` in signatures and docstrings. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2793/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2792/comments | https://api.github.com/repos/huggingface/datasets/issues/2792/events | https://github.com/huggingface/datasets/pull/2792 | 968,650,274 | MDExOlB1bGxSZXF1ZXN0NzEwNzUyMjc0 | 2,792 | Update: GooAQ - add train/val/test splits | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 1,628,768,418,000 | 1,630,079,925,000 | 1,630,079,894,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2792",
"html_url": "https://github.com/huggingface/datasets/pull/2792",
"diff_url": "https://github.com/huggingface/datasets/pull/2792.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2792.patch",
"merged_at": 1630079894000
} | [GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2792/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2791/comments | https://api.github.com/repos/huggingface/datasets/issues/2791/events | https://github.com/huggingface/datasets/pull/2791 | 968,360,314 | MDExOlB1bGxSZXF1ZXN0NzEwNDgxNDAy | 2,791 | Fix typo in cnn_dailymail | {
"login": "omaralsayed",
"id": 42531544,
"node_id": "MDQ6VXNlcjQyNTMxNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/42531544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omaralsayed",
"html_url": "https://github.com/omaralsayed",
"followers_url": "https://api.github.com/users/omaralsayed/followers",
"following_url": "https://api.github.com/users/omaralsayed/following{/other_user}",
"gists_url": "https://api.github.com/users/omaralsayed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omaralsayed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omaralsayed/subscriptions",
"organizations_url": "https://api.github.com/users/omaralsayed/orgs",
"repos_url": "https://api.github.com/users/omaralsayed/repos",
"events_url": "https://api.github.com/users/omaralsayed/events{/privacy}",
"received_events_url": "https://api.github.com/users/omaralsayed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,757,522,000 | 1,628,767,079,000 | 1,628,767,079,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2791",
"html_url": "https://github.com/huggingface/datasets/pull/2791",
"diff_url": "https://github.com/huggingface/datasets/pull/2791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2791.patch",
"merged_at": 1628767079000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2791/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2790/comments | https://api.github.com/repos/huggingface/datasets/issues/2790/events | https://github.com/huggingface/datasets/pull/2790 | 967,772,181 | MDExOlB1bGxSZXF1ZXN0NzA5OTI3NjM2 | 2,790 | Fix typo in test_dataset_common | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,730,629,000 | 1,628,767,889,000 | 1,628,767,889,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2790",
"html_url": "https://github.com/huggingface/datasets/pull/2790",
"diff_url": "https://github.com/huggingface/datasets/pull/2790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2790.patch",
"merged_at": 1628767889000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2790/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2789/comments | https://api.github.com/repos/huggingface/datasets/issues/2789/events | https://github.com/huggingface/datasets/pull/2789 | 967,361,934 | MDExOlB1bGxSZXF1ZXN0NzA5NTQwMzY5 | 2,789 | Updated dataset description of DaNE | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for finishing it @albertvillanova "
] | 1,628,711,928,000 | 1,628,784,659,000 | 1,628,784,361,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2789",
"html_url": "https://github.com/huggingface/datasets/pull/2789",
"diff_url": "https://github.com/huggingface/datasets/pull/2789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2789.patch",
"merged_at": 1628784361000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2789/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2788/comments | https://api.github.com/repos/huggingface/datasets/issues/2788/events | https://github.com/huggingface/datasets/issues/2788 | 967,149,389 | MDU6SXNzdWU5NjcxNDkzODk= | 2,788 | How to sample every file in a list of files making up a split in a dataset when loading? | {
"login": "brijow",
"id": 11220949,
"node_id": "MDQ6VXNlcjExMjIwOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11220949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brijow",
"html_url": "https://github.com/brijow",
"followers_url": "https://api.github.com/users/brijow/followers",
"following_url": "https://api.github.com/users/brijow/following{/other_user}",
"gists_url": "https://api.github.com/users/brijow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brijow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brijow/subscriptions",
"organizations_url": "https://api.github.com/users/brijow/orgs",
"repos_url": "https://api.github.com/users/brijow/repos",
"events_url": "https://api.github.com/users/brijow/events{/privacy}",
"received_events_url": "https://api.github.com/users/brijow/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\r\n \"csv\",\r\n data_files=data_files_dict,\r\n).shuffle(seed=seed)\r\n\r\nsample_dataset = {splitname: split.select(range(8)) for splitname, split in dataset.items()}\r\n```\r\n\r\nAnother alternative is loading each file separately with `split=\"train[:8]\"` and then use `concatenate_datasets` to merge the sample of each file."
] | 1,628,703,801,000 | 1,629,738,742,000 | null | NONE | null | null | null | I am loading a dataset with multiple train, test, and validation files like this:
```
data_files_dict = {
"train": [train_file1, train_file2],
"test": [test_file1, test_file2],
"val": [val_file1, val_file2]
}
dataset = datasets.load_dataset(
"csv",
data_files=data_files_dict,
split=['train[:8]', 'test[:8]', 'val[:8]']
)
```
However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists.
I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split.
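Something like per-file loading plus concatenation (a sketch; the file names are placeholders) would get me there, but it feels roundabout:
```python
import datasets

# Sample the first 8 rows of each train file, then merge the samples.
train_parts = [
    datasets.load_dataset("csv", data_files=f, split="train[:8]")
    for f in ["train_file1.csv", "train_file2.csv"]
]
train_sample = datasets.concatenate_datasets(train_parts)
```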
Is this type of splitting supported? If so, how can I do it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2788/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2787/comments | https://api.github.com/repos/huggingface/datasets/issues/2787/events | https://github.com/huggingface/datasets/issues/2787 | 967,018,406 | MDU6SXNzdWU5NjcwMTg0MDY= | 2,787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | {
"login": "jinec",
"id": 39627475,
"node_id": "MDQ6VXNlcjM5NjI3NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/39627475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinec",
"html_url": "https://github.com/jinec",
"followers_url": "https://api.github.com/users/jinec/followers",
"following_url": "https://api.github.com/users/jinec/following{/other_user}",
"gists_url": "https://api.github.com/users/jinec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinec/subscriptions",
"organizations_url": "https://api.github.com/users/jinec/orgs",
"repos_url": "https://api.github.com/users/jinec/repos",
"events_url": "https://api.github.com/users/jinec/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinec/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)",
"Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com\r\n\r\nNormally, it should work if you wait a little and then retry.\r\n\r\nCould you please confirm if the problem persists?",
"cannot connect,even by Web browser,please check that there is some problems。",
"I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...",
"> I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n\r\nI can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China",
"Finally i can access it, by the superfast software. Thanks",
"> Finally i can access it, by the superfast software. Thanks\r\n\r\nExcuse me, I have the same problem as you, could you please tell me how to solve it?"
] | 1,628,698,741,000 | 1,637,735,138,000 | 1,629,299,358,000 | NONE | null | null | null | Hello,
I am trying to run run_glue.py and it gives me this error -
```
Traceback (most recent call last):
  File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
    main()
  File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main
    datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir)
  File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset
    use_auth_token=use_auth_token,
  File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path
    use_auth_token=download_config.use_auth_token,
  File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py
```
Trying to run:
```
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./tmp/mrpc/
```
Is this something on my end? From what I can tell, this was fixed by @fullyz a few months ago.
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2787/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2786/comments | https://api.github.com/repos/huggingface/datasets/issues/2786/events | https://github.com/huggingface/datasets/pull/2786 | 966,282,934 | MDExOlB1bGxSZXF1ZXN0NzA4NTQwMzU0 | 2,786 | Support streaming compressed files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,672,526,000 | 1,629,178,119,000 | 1,629,095,779,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2786",
"html_url": "https://github.com/huggingface/datasets/pull/2786",
"diff_url": "https://github.com/huggingface/datasets/pull/2786.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2786.patch",
"merged_at": 1629095779000
} | Add support to stream compressed files (current options in fsspec; a usage sketch follows the list):
- bz2
- lz4
- xz
- zstd
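A usage sketch (the URL is a placeholder; streaming each format requires the matching decompression library to be installed):
```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="https://example.com/data.jsonl.xz",  # placeholder URL
    split="train",
    streaming=True,
)
print(next(iter(ds)))
```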
cc: @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2786/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2786/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2783/comments | https://api.github.com/repos/huggingface/datasets/issues/2783/events | https://github.com/huggingface/datasets/pull/2783 | 965,461,382 | MDExOlB1bGxSZXF1ZXN0NzA3NzcxOTM3 | 2,783 | Add KS task to SUPERB | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)",
"@albertvillanova thanks! Everything should be ready now :)",
"> The _background_noise_/_silence_ audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime)\r\n\r\n@anton-l I was thinking that maybe we could give some hints in the dataset card (in a Usage section); something similar as for diarization: https://github.com/huggingface/datasets/blob/master/datasets/superb/README.md#example-of-usage\r\nNote that for diarization it is not yet finished: we have to test it and then provide an end-to-end example: https://github.com/huggingface/datasets/pull/2661/files#r680224909 ",
"@albertvillanova yeah, I'm not sure how to best implement it in pure `datasets` yet. It's something like this, where `sample_noise()` needs to be called from a pytorch batch collator or other framework-specific variant:\r\n\r\n```python\r\ndef map_to_array(example):\r\n import soundfile as sf\r\n\r\n speech_array, sample_rate = sf.read(example[\"file\"])\r\n example[\"speech\"] = speech_array\r\n example[\"sample_rate\"] = sample_rate\r\n return example\r\n\r\n\r\ndef sample_noise(example):\r\n # Use a version of this function in a stateless way to extract random 1 sec slices of background noise\r\n # on each epoch\r\n from random import randint\r\n\r\n # _silence_ audios are longer than 1 sec\r\n if example[\"label\"] == \"_silence_\":\r\n random_offset = randint(0, len(example[\"speech\"]) - example[\"sample_rate\"] - 1)\r\n example[\"speech\"] = example[\"speech\"][random_offset : random_offset + example[\"sample_rate\"]]\r\n\r\n return example\r\n```",
"I see... Yes, not trivial indeed. Maybe for the moment you could add those functions above to the README (as it is the case for now in diarization)? What do you think?"
] | 1,628,633,647,000 | 1,628,786,701,000 | 1,628,713,157,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2783",
"html_url": "https://github.com/huggingface/datasets/pull/2783",
"diff_url": "https://github.com/huggingface/datasets/pull/2783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2783.patch",
"merged_at": 1628713157000
} | Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051).
- [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting)
- [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_commands/dataset.py)
- [TFDS implementation](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/speech_commands.py)
Some notable quirks:
- The dataset is originally single-archive (train+val+test all in one), but the test set has a "canonical" distribution in a separate archive, which is also used here (see `_split_ks_files()`).
- The `_background_noise_`/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime)
Related to #2619. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2783/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2783/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2782/comments | https://api.github.com/repos/huggingface/datasets/issues/2782/events | https://github.com/huggingface/datasets/pull/2782 | 964,858,439 | MDExOlB1bGxSZXF1ZXN0NzA3MjQ5NDE5 | 2,782 | Fix renaming of corpus_bleu args | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,593,354,000 | 1,628,594,167,000 | 1,628,594,167,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2782",
"html_url": "https://github.com/huggingface/datasets/pull/2782",
"diff_url": "https://github.com/huggingface/datasets/pull/2782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2782.patch",
"merged_at": 1628594167000
} | The latest `sacrebleu` release (v2.0.0) has renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR passes the args positionally (without parameter names), so that the call is valid for all versions of `sacrebleu`.
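The pattern is roughly the following (a sketch, not the PR's exact diff):
```python
import sacrebleu

predictions = ["hello there"]   # system outputs
references = [["hello there"]]  # one stream of references
# Positional call: valid for both the old (sys_stream, ref_streams) and the
# new (hypotheses, references) parameter names.
score = sacrebleu.corpus_bleu(predictions, references)
```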
This is a partial hotfix of #2781.
Close #2781. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2782/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2781/comments | https://api.github.com/repos/huggingface/datasets/issues/2781/events | https://github.com/huggingface/datasets/issues/2781 | 964,805,351 | MDU6SXNzdWU5NjQ4MDUzNTE= | 2,781 | Latest v2.0.0 release of sacrebleu has broken some metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,628,589,581,000 | 1,628,594,167,000 | 1,628,594,167,000 | MEMBER | null | null | null | ## Describe the bug
After the `sacrebleu` v2.0.0 release (see the changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of the `datasets` metrics are broken:
- Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists:
- #2739
- #2778
- Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`:
- #2779
- `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`:
- #2782 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2781/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2780/comments | https://api.github.com/repos/huggingface/datasets/issues/2780/events | https://github.com/huggingface/datasets/pull/2780 | 964,794,764 | MDExOlB1bGxSZXF1ZXN0NzA3MTk2NjA3 | 2,780 | VIVOS dataset for Vietnamese ASR | {
"login": "binh234",
"id": 57580923,
"node_id": "MDQ6VXNlcjU3NTgwOTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/57580923?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binh234",
"html_url": "https://github.com/binh234",
"followers_url": "https://api.github.com/users/binh234/followers",
"following_url": "https://api.github.com/users/binh234/following{/other_user}",
"gists_url": "https://api.github.com/users/binh234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binh234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binh234/subscriptions",
"organizations_url": "https://api.github.com/users/binh234/orgs",
"repos_url": "https://api.github.com/users/binh234/repos",
"events_url": "https://api.github.com/users/binh234/events{/privacy}",
"received_events_url": "https://api.github.com/users/binh234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,588,856,000 | 1,628,766,570,000 | 1,628,766,570,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2780",
"html_url": "https://github.com/huggingface/datasets/pull/2780",
"diff_url": "https://github.com/huggingface/datasets/pull/2780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2780.patch",
"merged_at": 1628766570000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2780/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2779/comments | https://api.github.com/repos/huggingface/datasets/issues/2779/events | https://github.com/huggingface/datasets/pull/2779 | 964,775,085 | MDExOlB1bGxSZXF1ZXN0NzA3MTgwNTgw | 2,779 | Fix sacrebleu tokenizers | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,587,467,000 | 1,628,593,388,000 | 1,628,593,074,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2779",
"html_url": "https://github.com/huggingface/datasets/pull/2779",
"diff_url": "https://github.com/huggingface/datasets/pull/2779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2779.patch",
"merged_at": 1628593074000
} | Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR hotfixes the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`.
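For illustration, a minimal sketch of the lookup; note this is hedged: `_get_tokenizer` is a private, undocumented helper, and its exact behaviour (taking a tokenizer name and returning a tokenizer class) is an assumption that may break between releases.
```python
# Sketch only: `_get_tokenizer` is private sacrebleu API; the assumption here
# is that it takes a tokenizer name and returns a tokenizer class.
from sacrebleu.metrics.bleu import _get_tokenizer

tokenizer_cls = _get_tokenizer("13a")  # "13a" is sacrebleu's mteval-v13a tokenizer
tokenizer = tokenizer_cls()
print(tokenizer("Hello, world!"))
```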
Eventually, this should be further fixed in order to use only public functions.
This is a partial hotfix of #2781. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2779/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2778/comments | https://api.github.com/repos/huggingface/datasets/issues/2778/events | https://github.com/huggingface/datasets/pull/2778 | 964,737,422 | MDExOlB1bGxSZXF1ZXN0NzA3MTQ5MTk2 | 2,778 | Do not pass tokenize to sacrebleu | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,584,837,000 | 1,628,589,817,000 | 1,628,589,817,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2778",
"html_url": "https://github.com/huggingface/datasets/pull/2778",
"diff_url": "https://github.com/huggingface/datasets/pull/2778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2778.patch",
"merged_at": 1628589817000
} | Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR stops passing `tokenize` to `sacrebleu` (note that the user cannot pass it anyway), so `sacrebleu` will use its own default tokenizer, wherever that default is defined and however it is called.
Related to #2739.
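A minimal sketch of the resulting call pattern (the sentences are made up; the point is only that no `tokenize` argument is forwarded):
```python
import sacrebleu

predictions = ["the cat sat on the mat"]
references = [["the cat sat on the mat"], ["there is a cat on the mat"]]

# No `tokenize=...` is passed, so sacrebleu applies whatever default tokenizer
# the installed version defines.
result = sacrebleu.corpus_bleu(predictions, references)
print(result.score)
```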
This is a partial hotfix of #2781. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2778/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2777/comments | https://api.github.com/repos/huggingface/datasets/issues/2777/events | https://github.com/huggingface/datasets/pull/2777 | 964,696,380 | MDExOlB1bGxSZXF1ZXN0NzA3MTEzNzg3 | 2,777 | Use packaging to handle versions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,628,581,899,000 | 1,629,294,987,000 | 1,629,294,987,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2777",
"html_url": "https://github.com/huggingface/datasets/pull/2777",
"diff_url": "https://github.com/huggingface/datasets/pull/2777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2777.patch",
"merged_at": 1629294987000
} | Use packaging module to handle/validate/check versions of Python packages.
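For example, a version check then becomes a sketch like the following (the `pyarrow>=1.0.0` bound is purely illustrative, not the library's actual requirement):
```python
from packaging import version

import pyarrow

# Semantic version comparison instead of ad-hoc string splitting.
if version.parse(pyarrow.__version__) < version.parse("1.0.0"):
    raise ImportError("this sketch assumes pyarrow>=1.0.0")

# packaging also orders dev and pre-releases correctly per PEP 440:
assert version.parse("2.0.0.dev1") < version.parse("2.0.0rc1") < version.parse("2.0.0")
```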
Related to #2769. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2777/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2776/comments | https://api.github.com/repos/huggingface/datasets/issues/2776/events | https://github.com/huggingface/datasets/issues/2776 | 964,400,596 | MDU6SXNzdWU5NjQ0MDA1OTY= | 2,776 | document `config.HF_DATASETS_OFFLINE` and precedence | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,628,544,197,000 | 1,628,544,197,000 | null | CONTRIBUTOR | null | null | null | https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but:
1. `config.HF_DATASETS_OFFLINE` is not documented
2. the precedence is not documented (env, config)
I'm thinking it probably should be documented similarly to what https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub says about `datasets.config.IN_MEMORY_MAX_SIZE`:
Quote:
> The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero.
Context: trying to use `config.HF_DATASETS_OFFLINE` here:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48
but we are uncertain whether it's safe to rely on, since it's not documented as a public API.
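For reference, the usage pattern we have in mind is roughly the following sketch (assuming, as with other config flags, that the env var is read once when `datasets` is imported):
```python
import os

# Assumption: HF_DATASETS_OFFLINE must be set before importing `datasets`,
# since the config module appears to read env vars at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

if datasets.config.HF_DATASETS_OFFLINE:
    print("Fully offline: only local caches will be used.")
```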
Thank you!
@lhoestq, @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2776/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2775/comments | https://api.github.com/repos/huggingface/datasets/issues/2775/events | https://github.com/huggingface/datasets/issues/2775 | 964,303,626 | MDU6SXNzdWU5NjQzMDM2MjY= | 2,775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | {
"login": "mbforbes",
"id": 1170062,
"node_id": "MDQ6VXNlcjExNzAwNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbforbes",
"html_url": "https://github.com/mbforbes",
"followers_url": "https://api.github.com/users/mbforbes/followers",
"following_url": "https://api.github.com/users/mbforbes/following{/other_user}",
"gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions",
"organizations_url": "https://api.github.com/users/mbforbes/orgs",
"repos_url": "https://api.github.com/users/mbforbes/repos",
"events_url": "https://api.github.com/users/mbforbes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbforbes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo",
"Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.\r\n\r\nAny opinion on this @LysandreJik ?",
"Yes, this sounds good @lhoestq "
] | 1,628,537,331,000 | 1,629,966,654,000 | null | NONE | null | null | null | ## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below.
Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected:
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265
However, what's not expected is that `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like:
```text
Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow
```
The path is exactly the same each run (e.g., last 26 runs).
This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000.
I think that
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248
... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below.
## Steps to reproduce the bug
```python
# Contents of print_fingerprint.py
from transformers import set_seed
from datasets.fingerprint import generate_random_fingerprint
set_seed(42)
print(generate_random_fingerprint())
```
```bash
for i in {0..10}; do
python print_fingerprint.py
done
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
```
## Expected results
After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused.
## Actual results
After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run.
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2775/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2774/comments | https://api.github.com/repos/huggingface/datasets/issues/2774/events | https://github.com/huggingface/datasets/pull/2774 | 963,932,199 | MDExOlB1bGxSZXF1ZXN0NzA2NDY2MDc0 | 2,774 | Prevent .map from using multiprocessing when loading from cache | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ",
"Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r\ngit checkout sequential_map_when_cached\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push -u origin sequential_map_when_cached\r\n```",
"Thanks for working on this ! I'm sure we can figure something out ;)\r\n\r\nCurrently `map` starts a process to apply the map function on each shard. If the shard has already been processed, then the process that has been spawned loads the processed shard from the cache and returns it.\r\n\r\nI think we should be able to simply not start a process if a shard is already processed and cached.\r\nThis way:\r\n- you won't need to specify `sequential=True`\r\n- it won't create new processes if the dataset is already processed and cached\r\n- it will properly reload each processed shard that is cached\r\n\r\nTo know if we have to start a new process for a shard you can use the function `update_fingerprint` from fingerprint.py to know the expected fingerprint of the processed shard.\r\nThen, if the shard has already been processed, there will be a cache file named `cached-<new_fingerprint>.arrow` and you can load it with\r\n```\r\nDataset.from_file(path_to_cache_file, info=self.info, split=self.split)\r\n```\r\n\r\nLet me know if that makes sense !",
"Yes that makes total sense, I tried to initially do that, except the way fingerprint is handled doesn't allow to easily manipulate such a field. Typically the fingerprinting is handled in `@fingerprint_transform` which has a bunch of params that aren't quite easy to extract. Those params are used to manipulate args, kwargs in fancy ways in order to finally obtain a dictionary used for fingerprint. I could duplicate everything, but this look like a very risky thing to do. I'll take a look if I can make something work with `inspect` if I can make a very simple wrapper.\r\n\r\nA much more simpler solution I think is adding an optional `shard: Optional[int] = None` parameter. If None, we use the number of proc as the number of shards, otherwise we pass down the expected number of shards and use either sequential/multiprocessing (with arbitrary number of workers) to load the shards? This would allow the weird case where one wants a large number of shards with a limited amount of processes. Not the smartest thing to do, but it's not an absurd behaviour. Would this be acceptable?",
"@lhoestq friendly ping as I feel it's up for review.",
"The CI error is unrelated to the changes of this PR - it looks like an SSL issue with conda"
] | 1,628,511,098,000 | 1,631,182,828,000 | 1,631,182,828,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2774",
"html_url": "https://github.com/huggingface/datasets/pull/2774",
"diff_url": "https://github.com/huggingface/datasets/pull/2774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2774.patch",
"merged_at": 1631182828000
} | ## Context
On our setup, we use different configurations for training vs preprocessing datasets. We are usually able to obtain a high number of CPUs for preprocessing, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently, if we use `num_proc={whatever the preprocessing value was}`, we load from cache, but we get:
```
Traceback (most recent call last):
File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker
put((job, i, result))
File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put
self._writer.send_bytes(obj)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes
self._send_bytes(m[offset:offset + size])
File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes
self._send(header + buf)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
```
Our current guess is that we're spawning too many processes compared to the number of CPUs available, and it runs OOM. Also, we're loading this in a DDP setting, which means that for each GPU we need to spawn a high number of processes to match the preprocessing fingerprint.
Instead what we suggest:
- Allow loading shards sequentially, sharing the same fingerprint as the multiprocessed ones, in order to leverage multiprocessing when we actually generate the cache, and skip it when loading from cache.
## Current issues
~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential version generate two different hash.~
**EDIT**: Turns out multiprocessing and sequential produce different `transform` values for fingerprinting (check `fingerprint_transform`) when running `_map_single`:
- sequential : `datasets.arrow_dataset.Dataset._map_single`
- multiprocessing: `datasets.arrow_dataset._map_single`
This discrepancy is caused by multiprocessing pickling the transform function; it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qualname__` isn't handled correctly in multiprocessing, but replacing `__qualname__` with `__name__` fixes the issue.
## What was done
~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~
I couldn't find a nice way to obtain the `cached_file_name` and check that they all exist before deciding whether to use the multiprocessing flow. Instead, I expose an optional boolean `sequential` in the `map` method.
## TODO
- [x] Check that the multiprocessed version and the sequential version output the same output
- [x] Check that sequential can load multiprocessed
- [x] Check that multiprocessed can load sequential
## Test
```python
from datasets import load_dataset
from multiprocessing import Pool
import random
def process(batch, rng):
length = len(batch["text"])
return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]}
dataset = load_dataset("stas/openwebtext-10k", split="train")
print(dataset.column_names)
print(type(dataset))
rng = random.Random(42)
dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng})
# This one should be loaded from cache
rng = random.Random(42)
dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True)
# Just to check that the random generator was correct
print(dataset1[-1]["processed_text"])
print(dataset2[-1]["processed_text"])
```
## Other solutions
I chose to load everything sequentially, but we can probably find a way to load shards in parallel with a different number of workers (essentially an argument not used for fingerprinting, allowing `m` shards to be loaded with `n` processes, which would be very useful when the same dataset has to be loaded on two different setups while still leveraging the cache).
Alternatively, we could use an env variable similar to `TOKENIZERS_PARALLELISM`, as this seems setup-related (though this changes slightly if we use multiprocessing).
cc @lhoestq (since I had asked you previously about `num_proc` being used for fingerprinting). Don't know if this is acceptable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2774/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2773/comments | https://api.github.com/repos/huggingface/datasets/issues/2773/events | https://github.com/huggingface/datasets/issues/2773 | 963,730,497 | MDU6SXNzdWU5NjM3MzA0OTc= | 2,773 | Remove dataset_infos.json | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [] | 1,628,494,999,000 | 1,628,494,999,000 | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
As discussed, some of the info in `dataset_infos.json` is redundant and could live only in the README file.
Other fields could be migrated to the README, like "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]", ...
However, there are others that do not seem too meaningful in the README, like the checksums.
**Describe the solution you'd like**
Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept.
cc: @julien-c @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2773/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2772/comments | https://api.github.com/repos/huggingface/datasets/issues/2772/events | https://github.com/huggingface/datasets/issues/2772 | 963,348,834 | MDU6SXNzdWU5NjMzNDg4MzQ= | 2,772 | Remove returned feature constrain | {
"login": "PosoSAgapo",
"id": 33200481,
"node_id": "MDQ6VXNlcjMzMjAwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PosoSAgapo",
"html_url": "https://github.com/PosoSAgapo",
"followers_url": "https://api.github.com/users/PosoSAgapo/followers",
"following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}",
"gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions",
"organizations_url": "https://api.github.com/users/PosoSAgapo/orgs",
"repos_url": "https://api.github.com/users/PosoSAgapo/repos",
"events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PosoSAgapo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,628,395,290,000 | 1,628,412,481,000 | null | NONE | null | null | null | In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse (e.g., verb words or noun chunks): if we want to assign different values to different words while scoring only the useful words such as verbs, we end up with a large sparse matrix.
At large scale, saving such a matrix densely takes a lot of disk storage and makes it hard to read; the usual approach is to save it in sparse form. However, NumPy does not support sparse matrices, so I have to use PyTorch or SciPy to convert the matrix into a special sparse format, which cannot be turned into a list or ndarray. This violates the feature constraint of the map function.
I do appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary: in some cases we simply cannot transform the data into a list or ndarray. Is there any way to fix this, or something I can do to disable the constraint?
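For what it's worth, one possible workaround sketch is to serialize the sparse matrix into plain lists (COO triplets plus shape) so it satisfies the current constraint:
```python
import numpy as np
from scipy.sparse import csr_matrix

# Workaround sketch: store COO triplets + shape as plain lists, which map()
# accepts, instead of returning the scipy sparse object itself.
dense = np.array([[0.0, 2.5, 0.0], [0.0, 0.0, 1.0]])
coo = csr_matrix(dense).tocoo()

record = {
    "rows": coo.row.tolist(),
    "cols": coo.col.tolist(),
    "values": coo.data.tolist(),
    "shape": list(coo.shape),
}

# The sparse matrix can be rebuilt later from the stored triplets:
restored = csr_matrix(
    (record["values"], (record["rows"], record["cols"])), shape=record["shape"]
)
assert np.allclose(restored.toarray(), dense)
```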
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2772/timeline | null | false |