url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3275/comments | https://api.github.com/repos/huggingface/datasets/issues/3275/events | https://github.com/huggingface/datasets/pull/3275 | 1,053,698,898 | PR_kwDODunzps4uiN9t | 3,275 | Force data files extraction if download_mode='force_redownload' | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,984,824,000 | 1,636,987,523,000 | 1,636,987,523,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3275",
"html_url": "https://github.com/huggingface/datasets/pull/3275",
"diff_url": "https://github.com/huggingface/datasets/pull/3275.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3275.patch",
"merged_at": 1636987523000
} | Avoids weird issues when redownloading a dataset due to cached data not being fully updated.
With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be worked around (not fully fixed) as follows:
```python
dset = load_dataset(..., download_mode="force_redownload")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3275/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3274/comments | https://api.github.com/repos/huggingface/datasets/issues/3274/events | https://github.com/huggingface/datasets/pull/3274 | 1,053,689,140 | PR_kwDODunzps4uiL8- | 3,274 | Fix some contact information formats | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !"
] | 1,636,984,234,000 | 1,636,987,435,000 | 1,636,987,434,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3274",
"html_url": "https://github.com/huggingface/datasets/pull/3274",
"diff_url": "https://github.com/huggingface/datasets/pull/3274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3274.patch",
"merged_at": 1636987434000
} | As reported in https://github.com/huggingface/datasets/issues/3188, some contact information is not displayed correctly.
This PR fixes this for CoNLL-2002 and some other datasets that have the same issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3274/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3273/comments | https://api.github.com/repos/huggingface/datasets/issues/3273/events | https://github.com/huggingface/datasets/issues/3273 | 1,053,554,038 | I_kwDODunzps4-y_V2 | 3,273 | Respect row ordering when concatenating datasets along axis=1 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,636,975,634,000 | 1,637,163,671,000 | 1,637,163,671,000 | CONTRIBUTOR | null | null | null | Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored.
A minimal reproducible example:
```python
>>> from datasets import Dataset, concatenate_datasets
>>> a = Dataset.from_dict({"a": [30, 20, 10]})
>>> b = Dataset.from_dict({"b": [2, 1, 3]})
>>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1)
>>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]}
{'a': [10, 20, 30], 'b': [3, 1, 2]}
```
I've noticed the bug while working on #3195. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3273/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3272/comments | https://api.github.com/repos/huggingface/datasets/issues/3272/events | https://github.com/huggingface/datasets/issues/3272 | 1,053,516,479 | I_kwDODunzps4-y2K_ | 3,272 | Make iter_archive work with ZIP files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hello, is this issue open for any contributor ? can I work on it ?\r\n\r\n",
"Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.\r\n\r\nTo begin with, feel free to take a look at both implementations of `iter_archive` for local downloads and for data streaming:\r\n\r\nIn the `DownloadManager` for local dowloads:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/download_manager.py#L218-L242\r\n\r\nIn the `StreamingDownloadManager` to stream the content of the archive directly from the remote file:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/streaming_download_manager.py#L502-L526\r\n\r\nNotice the call to `xopen` that opens and streams a file given either an URL or a local path :)",
"Okay thank you for the information. I will work on this :) ",
"#self-assign"
] | 1,636,973,442,000 | 1,637,798,927,000 | null | MEMBER | null | null | null | Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive.
It would be nice if it could work with ZIP files too! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3272/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3272/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3271/comments | https://api.github.com/repos/huggingface/datasets/issues/3271/events | https://github.com/huggingface/datasets/pull/3271 | 1,053,482,919 | PR_kwDODunzps4uhgi1 | 3,271 | Decode audio from remote | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,971,956,000 | 1,637,062,558,000 | 1,637,062,558,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3271",
"html_url": "https://github.com/huggingface/datasets/pull/3271",
"diff_url": "https://github.com/huggingface/datasets/pull/3271.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3271.patch",
"merged_at": 1637062558000
} | Currently the Audio feature type can only decode local audio files, not remote files.
To fix this, I replaced `open` in audio.py with our `xopen` function, which is compatible with remote files.
cc @albertvillanova @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3271/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3270/comments | https://api.github.com/repos/huggingface/datasets/issues/3270/events | https://github.com/huggingface/datasets/pull/3270 | 1,053,465,662 | PR_kwDODunzps4uhcxm | 3,270 | Add os.listdir for streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,971,244,000 | 1,636,972,023,000 | 1,636,972,023,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3270",
"html_url": "https://github.com/huggingface/datasets/pull/3270",
"diff_url": "https://github.com/huggingface/datasets/pull/3270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3270.patch",
"merged_at": 1636972022000
} | Extend `os.listdir` to support streaming data from remote files. This is often used to navigate inside remote ZIP files, for example. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3270/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3269/comments | https://api.github.com/repos/huggingface/datasets/issues/3269/events | https://github.com/huggingface/datasets/issues/3269 | 1,053,218,769 | I_kwDODunzps4-xtfR | 3,269 | coqa NonMatchingChecksumError | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ZhaofengWu, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.91MB/s]\r\nDownloading: 1.79kB [00:00, 1.79MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49.0M/49.0M [00:06<00:00, 7.17MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.09M/9.09M [00:01<00:00, 6.08MB/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00, 6.48s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 333.26it/s]\r\nDataset coqa downloaded and prepared to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 285.49it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 7199\r\n })\r\n validation: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 500\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details about your development environment? You can run the command `datasets-cli env` and copy-and-paste its output:\r\n```\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nIt might be because you are using an old version of `datasets`. Could you please update it (`pip install -U datasets`) and confirm if the problem parsists? ",
"I'm getting the same error in two separate environments:\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.0\r\n```\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.9.5\r\n- PyArrow version: 6.0.0\r\n```",
"I'm sorry, but don't get to reproduce the error in the Linux environment.\r\n\r\n@mariosasko @lhoestq can you reproduce it?",
"I also can't reproduce the error on Windows/Linux (tested both the master and the `1.15.1` version). ",
"Maybe the file had issues during the download ? Could you try to delete your cache and try again ?\r\nBy default the downloads cache is at `~/.cache/huggingface/datasets/downloads`\r\n\r\nAlso can you check if you have a proxy that could prevent the download to succeed ? Are you able to download those files via your browser ?",
"I got the same error in a third environment (google cloud) as well. The internet for these three environments are all different so I don't think that's the reason.\r\n```\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-1022-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.0\r\n```\r\nI deleted the entire `~/.cache/huggingface/datasets` on my local mac, and got a different first time error.\r\n```\r\nPython 3.9.5 (default, May 18 2021, 12:31:01) \r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.19MB/s] \r\nDownloading: 1.79kB [00:00, 712kB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.36MB/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 2.47it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 675, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/Users/zhaofengw/.cache/huggingface/modules/datasets_modules/datasets/coqa/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0/coqa.py\", line 70, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 284, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 216, in map_nested\r\n mapped = [\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 217, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 152, in _single_map_nested\r\n return function(data_struct)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 295, in cached_path\r\n output_path 
= get_from_cache(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 594, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\r\n>>> dataset = load_dataset(\"coqa\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 6.26it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1087.45it/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:45<00:45, 45.60s/it]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 679, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']\r\n```\r\nI can access the URL using my browser, though I did notice a redirection -- could that have something to do with it?",
"Hi @ZhaofengWu, \r\n\r\nWhat about in Google Colab? Can you run this notebook without errors? \r\nhttps://colab.research.google.com/drive/1CCpiiHmtNlfO_4CZ3-fW-TSShr1M0rL4?usp=sharing",
"I can run your notebook fine, but if I create one myself, it has that error: https://colab.research.google.com/drive/107GIdhrauPO6ZiFDY7G9S74in4qqI2Kx?usp=sharing.\r\n\r\nIt's so funny -- it's like whenever you guys run it it's fine but whenever I run it it fails, whatever the environment is.",
"I guess it must be some connection issue: the data owner may be blocking requests coming from your country or IP range...",
"I mean, I don't think google colab sends the connection from my IP. Same applies to google cloud.",
"Hello, I am having the same error with @ZhaofengWu first with \"social bias frames\" dataset. As I found this report, I tried also \"coqa\" and it fails as well. \r\n\r\nI test this on Google Colab. \r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\n\r\nThen another environment\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-12.0.1-arm64-arm-64bit\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n```\r\n\r\nI tried the notebook @albertvillanova provided earlier, and it fails...\r\n",
"Hi, still not able to reproduce the issue with `coqa`. If you still have this issue, could you please run these additional commands ?\r\n```python\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n9090845\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n`95d427588e3733e4ebec55f6938dbba6`\r\n>>> open(path).read(500)\r\n'{\\n \"version\": \"1.0\",\\n \"data\": [\\n {\\n \"source\": \"mctest\",\\n \"id\": \"3dr23u6we5exclen4th8uq9rb42tel\",\\n \"filename\": \"mc160.test.41\",\\n \"story\": \"Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer\\'s horses slept. But Cotton wasn\\'t alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters. All of her sisters w'\r\n```\r\n\r\nThis way we can know whether you downloaded a corrupted file or an error file that could cause the `NonMatchingChecksumError` error to happen",
"```\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n222\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n'1195812a37c01a4481a4748c85d0c6a9'\r\n>>> open(path).read(500)\r\n'<html>\\n<head><title>503 Service Temporarily Unavailable</title></head>\\n<body bgcolor=\"white\">\\n<center><h1>503 Service Temporarily Unavailable</h1></center>\\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\\n</body>\\n</html>\\n'\r\n```\r\nLooks like there was a server-side error when downloading the dataset? But I don't believe this is a transient error given (a) deleting the cache and re-downloading gives the same error; (b) it happens on multiple platforms with different network configurations; (c) other people are getting this error too, see above. So I'm not sure why it works for some people but not others.",
"`wget https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json` does work. So I suspect there might be some problem in `datasets`' networking code? Can you give me some snippet that simulates how `datasets` requests the resource which I can run on my end?",
"There is a redirection -- I don't know if that's the cause.",
"Ok This is an issue with the server that hosts the data at `https://nlp.stanford.edu/nlp/data` that randomly returns 503 (by trying several times it also happens on my side), hopefully it can be fixed soon. I'll try to reach the people in charge of hosting the data",
"Thanks. Also it might help to display a more informative error message?",
"You're right. I just opened a PR that would show this error if it happens again:\r\n```python\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json (error 503)\r\n```"
] | 1,636,952,647,000 | 1,642,600,699,000 | 1,642,600,699,000 | NONE | null | null | null | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s]
Downloading: 1.79kB [00:00, 733kB/s]
Using custom data configuration default
Downloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.32MB/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.91it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1117.44it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 679, in _download_and_prepare
verify_checksums(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3269/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3268/comments | https://api.github.com/repos/huggingface/datasets/issues/3268/events | https://github.com/huggingface/datasets/issues/3268 | 1,052,992,681 | I_kwDODunzps4-w2Sp | 3,268 | Dataset viewer issue for 'liweili/c4_200m' | {
"login": "liliwei25",
"id": 22389228,
"node_id": "MDQ6VXNlcjIyMzg5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22389228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liliwei25",
"html_url": "https://github.com/liliwei25",
"followers_url": "https://api.github.com/users/liliwei25/followers",
"following_url": "https://api.github.com/users/liliwei25/following{/other_user}",
"gists_url": "https://api.github.com/users/liliwei25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liliwei25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liliwei25/subscriptions",
"organizations_url": "https://api.github.com/users/liliwei25/orgs",
"repos_url": "https://api.github.com/users/liliwei25/repos",
"events_url": "https://api.github.com/users/liliwei25/events{/privacy}",
"received_events_url": "https://api.github.com/users/liliwei25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nLocally you can append `\"/*.tsv*\"` to your local path, however it doesn't work in streaming mode, and the dataset viewer does use the streaming mode.\r\nIn streaming mode, the download and extract part is done lazily. It means that instead of using local paths, it's still passing around URLs and [chained URLs](https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining)\r\n\r\nTherefore in streaming mode, `filepath` is not a local path, but instead is equal to\r\n```python\r\nzip://::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\nThe `zip://` part means that we navigate inside the remote ZIP file.\r\n\r\nYou must use `os.path.join` to navigate inside it and get your TSV files:\r\n```python\r\n>>> os.path.join(filepath, \"/*.tsv*\")\r\nzip://*.tsv*::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\n\r\n`datasets` extends `os.path.join`, `glob.glob`, etc. in your dataset scripts to work with remote files.",
"hi @lhoestq ! thanks for the tip! i've updated the line of code but it's still not working. am i doing something else wrong? thank you!",
"Hi ! Your dataset code is all good now :)\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: d = load_dataset(\"liweili/c4_200m\", streaming=True)\r\nDownloading: 100%|█████████████████████████████████████████████| 2.79k/2.79k [00:00<00:00, 4.83MB/s]\r\nUsing custom data configuration default\r\n\r\nIn [3]: next(iter(d[\"train\"]))\r\nOut[3]: \r\n{'input': 'Bitcoin is for $7,094 this morning, which CoinDesk says.',\r\n 'output': 'Bitcoin goes for $7,094 this morning, according to CoinDesk.'}\r\n```\r\nThough the viewer doesn't seem to be updated, I'll take a look at what's wrong",
"thank you @lhoestq! 😄 ",
"It's working\r\n\r\n<img width=\"1424\" alt=\"Capture d’écran 2021-12-21 à 11 24 29\" src=\"https://user-images.githubusercontent.com/1676121/146914238-24bf87c0-c68d-4699-8d6c-fa3065656d1d.png\">\r\n\r\n"
] | 1,636,910,326,000 | 1,640,082,320,000 | 1,640,082,291,000 | NONE | null | null | null | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3268/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3267/comments | https://api.github.com/repos/huggingface/datasets/issues/3267/events | https://github.com/huggingface/datasets/pull/3267 | 1,052,750,084 | PR_kwDODunzps4ufQzB | 3,267 | Replacing .format() and % by f-strings | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n\r\nFeel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)",
"> Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n> \r\n> Feel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)\r\n\r\nThank you for your answer :) , I will open a new PR with the correct changes.",
"Hi @lhoestq, I submitted 3 commits in a new PR (#3277) where I did not apply black.\r\n\r\nI can apply the ```make style``` command if asked.",
"Cool thanks ! Yes feel free to make sure you have `black==21.4b0` and run `make style`"
] | 1,636,830,722,000 | 1,637,096,426,000 | 1,637,074,543,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3267",
"html_url": "https://github.com/huggingface/datasets/pull/3267",
"diff_url": "https://github.com/huggingface/datasets/pull/3267.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3267.patch",
"merged_at": null
} | **Fix #3257**
Replaced _.format()_ and _%_ with f-strings in the following modules:
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
The remaining modules will follow in the next PR:
- [ ] **src**
The **datasets** module will not be edited, as requested by @mariosasko.
PS: black and isort were applied to the files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3267/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3266/comments | https://api.github.com/repos/huggingface/datasets/issues/3266/events | https://github.com/huggingface/datasets/pull/3266 | 1,052,700,155 | PR_kwDODunzps4ufH94 | 3,266 | Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution | {
"login": "LashaO",
"id": 28014149,
"node_id": "MDQ6VXNlcjI4MDE0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/28014149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LashaO",
"html_url": "https://github.com/LashaO",
"followers_url": "https://api.github.com/users/LashaO/followers",
"following_url": "https://api.github.com/users/LashaO/following{/other_user}",
"gists_url": "https://api.github.com/users/LashaO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LashaO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LashaO/subscriptions",
"organizations_url": "https://api.github.com/users/LashaO/orgs",
"repos_url": "https://api.github.com/users/LashaO/repos",
"events_url": "https://api.github.com/users/LashaO/events{/privacy}",
"received_events_url": "https://api.github.com/users/LashaO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?",
"Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfig(name='plain_text', version=1.0.0, data_dir=None, data_files={'train': 'train.c.txt', 'test': 'test.c.txt'}, description='Plain text import of the Definite Pronoun Resolution Dataset.')\r\n\r\n def __post_init__(self):\r\n # The config name is used to name the cache directory.\r\n invalid_windows_characters = r\"<>:/\\|?*\"\r\n for invalid_char in invalid_windows_characters:\r\n if invalid_char in self.name:\r\n raise InvalidConfigName(\r\n f\"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. \"\r\n f\"They could create issues when creating a directory for this config on Windows filesystem.\"\r\n )\r\n if self.data_files is not None and not isinstance(self.data_files, DataFilesDict):\r\n> raise ValueError(f\"Expected a DataFilesDict in data_files but got {self.data_files}\")\r\nE ValueError: Expected a DataFilesDict in data_files but got {'train': 'train.c.txt', 'test': 'test.c.txt'}\r\n```",
"Hi ! Thanks for the fixes :)\r\n\r\nInstead of uploading the `definite_pronoun_resolution` data files in this PR, maybe we can just update the URL ?\r\nThe old url was http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt, but now it's https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt (https instead of http)",
"Actually the bad certificate creates an issue with the download\r\n```python\r\nimport datasets \r\ndatasets.DownloadManager().download(\"https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\")\r\n# raises: ConnectionError: Couldn't reach https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\r\n```\r\n\r\nLet me see if I can fix that",
"I uploaded them to these URLs, feel free to use them instead of having the text files here in the PR :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/train.c.txt\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/test.c.txt",
"Thank you for the tips! Having a busy week so anyone willing to commit the suggestions is welcome. Else, I will try to get back to this in a while.",
"@LashaO Thanks for working on this. Yes, I'll take over as we already have a request to fix the URL of the Jeopardy! dataset in a separate issue.",
"~~Still have to fix the error in the dummy data test of the WikiAuto dataset (so please don't merge).~~ Done! Ready for merging.",
"Thank you, Mario!",
"The CI failure is only related to missing tags in the dataset cards, merging :)"
] | 1,636,815,694,000 | 1,638,789,391,000 | 1,638,789,391,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3266",
"html_url": "https://github.com/huggingface/datasets/pull/3266",
"diff_url": "https://github.com/huggingface/datasets/pull/3266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3266.patch",
"merged_at": 1638789391000
} | [#3264](https://github.com/huggingface/datasets/issues/3264) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3266/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3265/comments | https://api.github.com/repos/huggingface/datasets/issues/3265/events | https://github.com/huggingface/datasets/issues/3265 | 1,052,666,558 | I_kwDODunzps4-vmq- | 3,265 | Checksum error for kilt_task_wow | {
"login": "slyviacassell",
"id": 22296717,
"node_id": "MDQ6VXNlcjIyMjk2NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyviacassell",
"html_url": "https://github.com/slyviacassell",
"followers_url": "https://api.github.com/users/slyviacassell/followers",
"following_url": "https://api.github.com/users/slyviacassell/following{/other_user}",
"gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions",
"organizations_url": "https://api.github.com/users/slyviacassell/orgs",
"repos_url": "https://api.github.com/users/slyviacassell/repos",
"events_url": "https://api.github.com/users/slyviacassell/events{/privacy}",
"received_events_url": "https://api.github.com/users/slyviacassell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.",
"Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. "
] | 1,636,805,057,000 | 1,637,061,833,000 | 1,637,061,718,000 | NONE | null | null | null | ## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s]
Traceback (most recent call last):
File "kilt_wow.py", line 30, in <module>
main()
File "kilt_wow.py", line 27, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "kilt_wow.py", line 21, in load_dataset
return datasets.load_dataset('kilt_tasks','wow')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare
verify_checksums(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 4.0.1
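As suggested in the comments, a temporary workaround (until the recorded checksums are fixed) is to skip the verification step; a minimal sketch:
```python
from datasets import load_dataset

# workaround from the discussion: skip checksum verification for now
dataset = load_dataset("kilt_tasks", "wow", ignore_verifications=True)
```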
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3265/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3264/comments | https://api.github.com/repos/huggingface/datasets/issues/3264/events | https://github.com/huggingface/datasets/issues/3264 | 1,052,663,513 | I_kwDODunzps4-vl7Z | 3,264 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution | {
"login": "slyviacassell",
"id": 22296717,
"node_id": "MDQ6VXNlcjIyMjk2NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyviacassell",
"html_url": "https://github.com/slyviacassell",
"followers_url": "https://api.github.com/users/slyviacassell/followers",
"following_url": "https://api.github.com/users/slyviacassell/following{/other_user}",
"gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions",
"organizations_url": "https://api.github.com/users/slyviacassell/orgs",
"repos_url": "https://api.github.com/users/slyviacassell/repos",
"events_url": "https://api.github.com/users/slyviacassell/events{/privacy}",
"received_events_url": "https://api.github.com/users/slyviacassell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are <1MB in size total.",
"> #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.\r\n> \r\n> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. personal GDrive account) or upload them to the dataset folder directly and use github raw URLs? The files are <1MB in size.\r\n\r\nI am planning to fix it next few days. But my to-do list is full and I do not have the cache of definite_pronoun_resolution. I am glad that you can take this. Thanks a lot!",
"No problem, buddy! Will submit a PR over this weekend."
] | 1,636,804,032,000 | 1,654,105,096,000 | 1,654,105,096,000 | NONE | null | null | null | ## Describe the bug
- WikiAuto Manual
The original manual dataset, previously available at the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto), was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author.
```
https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv
```
- jeopardy
The download URL for jeopardy appears to have moved from
```
http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
to
```
https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg
```
- definite_pronoun_resolution
The following download URL for definite_pronoun_resolution cannot be reached for some reason.
```
http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt
```
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('wiki_auto', 'manual')
datasets.load_dataset('jeopardy')
datasets.load_dataset('definite_pronoun_resolution')
```
## Expected results
The downloads complete successfully
## Actual results
- WikiAuto Manual
```
Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8...
0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last):
File "wiki_auto.py", line 43, in <module>
main()
File "wiki_auto.py", line 40, in main
train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data
dataset = self.load_dataset()
File "wiki_auto.py", line 34, in load_dataset
return datasets.load_dataset('wiki_auto', 'manual')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators
data_dir = dl_manager.download_and_extract(my_urls)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
mapped = [
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv
```
- jeopardy
```
Using custom data configuration default
Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810...
Traceback (most recent call last):
File "jeopardy.py", line 45, in <module>
main()
File "jeopardy.py", line 42, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "jeopardy.py", line 36, in load_dataset
return datasets.load_dataset("jeopardy")
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators
filepath = dl_manager.download_and_extract(_DATA_URL)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
- definite_pronoun_resolution
```
Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff...
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "definite_pronoun_resolution.py", line 37, in <module>
main()
File "definite_pronoun_resolution.py", line 34, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "definite_pronoun_resolution.py", line 28, in load_dataset
return datasets.load_dataset('definite_pronoun_resolution')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators
files = dl_manager.download_and_extract(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
mapped = [
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3264/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3263/comments | https://api.github.com/repos/huggingface/datasets/issues/3263/events | https://github.com/huggingface/datasets/issues/3263 | 1,052,552,516 | I_kwDODunzps4-vK1E | 3,263 | FET DATA | {
"login": "FStell01",
"id": 90987031,
"node_id": "MDQ6VXNlcjkwOTg3MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FStell01",
"html_url": "https://github.com/FStell01",
"followers_url": "https://api.github.com/users/FStell01/followers",
"following_url": "https://api.github.com/users/FStell01/following{/other_user}",
"gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FStell01/subscriptions",
"organizations_url": "https://api.github.com/users/FStell01/orgs",
"repos_url": "https://api.github.com/users/FStell01/repos",
"events_url": "https://api.github.com/users/FStell01/events{/privacy}",
"received_events_url": "https://api.github.com/users/FStell01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,636,782,366,000 | 1,636,810,307,000 | 1,636,810,307,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3263/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3262/comments | https://api.github.com/repos/huggingface/datasets/issues/3262/events | https://github.com/huggingface/datasets/pull/3262 | 1,052,455,082 | PR_kwDODunzps4uej4t | 3,262 | asserts replaced with exception for image classification task, csv, json | {
"login": "manisnesan",
"id": 153142,
"node_id": "MDQ6VXNlcjE1MzE0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manisnesan",
"html_url": "https://github.com/manisnesan",
"followers_url": "https://api.github.com/users/manisnesan/followers",
"following_url": "https://api.github.com/users/manisnesan/following{/other_user}",
"gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions",
"organizations_url": "https://api.github.com/users/manisnesan/orgs",
"repos_url": "https://api.github.com/users/manisnesan/repos",
"events_url": "https://api.github.com/users/manisnesan/events{/privacy}",
"received_events_url": "https://api.github.com/users/manisnesan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,756,499,000 | 1,636,974,517,000 | 1,636,974,517,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3262",
"html_url": "https://github.com/huggingface/datasets/pull/3262",
"diff_url": "https://github.com/huggingface/datasets/pull/3262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3262.patch",
"merged_at": 1636974517000
} | Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3262/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3261/comments | https://api.github.com/repos/huggingface/datasets/issues/3261/events | https://github.com/huggingface/datasets/issues/3261 | 1,052,346,381 | I_kwDODunzps4-uYgN | 3,261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | {
"login": "lara-martin",
"id": 37913218,
"node_id": "MDQ6VXNlcjM3OTEzMjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lara-martin",
"html_url": "https://github.com/lara-martin",
"followers_url": "https://api.github.com/users/lara-martin/followers",
"following_url": "https://api.github.com/users/lara-martin/following{/other_user}",
"gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions",
"organizations_url": "https://api.github.com/users/lara-martin/orgs",
"repos_url": "https://api.github.com/users/lara-martin/repos",
"events_url": "https://api.github.com/users/lara-martin/events{/privacy}",
"received_events_url": "https://api.github.com/users/lara-martin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```",
"It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n"
] | 1,636,745,119,000 | 1,640,082,250,000 | 1,640,082,250,000 | NONE | null | null | null | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance!
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3261/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3260/comments | https://api.github.com/repos/huggingface/datasets/issues/3260/events | https://github.com/huggingface/datasets/pull/3260 | 1,052,247,373 | PR_kwDODunzps4ueCIU | 3,260 | Fix ConnectionError in Scielo dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI error is unrelated to the change."
] | 1,636,740,157,000 | 1,637,086,697,000 | 1,637,085,322,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3260",
"html_url": "https://github.com/huggingface/datasets/pull/3260",
"diff_url": "https://github.com/huggingface/datasets/pull/3260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3260.patch",
"merged_at": 1637085322000
} | This PR:
* allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint)
* makes the Scielo dataset streamable
Fixes #3255. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3260/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3259/comments | https://api.github.com/repos/huggingface/datasets/issues/3259/events | https://github.com/huggingface/datasets/pull/3259 | 1,052,189,775 | PR_kwDODunzps4ud5W3 | 3,259 | Updating details of IRC disentanglement data | {
"login": "jkkummerfeld",
"id": 1298052,
"node_id": "MDQ6VXNlcjEyOTgwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkkummerfeld",
"html_url": "https://github.com/jkkummerfeld",
"followers_url": "https://api.github.com/users/jkkummerfeld/followers",
"following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}",
"gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions",
"organizations_url": "https://api.github.com/users/jkkummerfeld/orgs",
"repos_url": "https://api.github.com/users/jkkummerfeld/repos",
"events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/jkkummerfeld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for the cleanup!"
] | 1,636,737,418,000 | 1,637,255,973,000 | 1,637,255,973,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3259",
"html_url": "https://github.com/huggingface/datasets/pull/3259",
"diff_url": "https://github.com/huggingface/datasets/pull/3259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3259.patch",
"merged_at": 1637255973000
} | I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3259/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3258/comments | https://api.github.com/repos/huggingface/datasets/issues/3258/events | https://github.com/huggingface/datasets/issues/3258 | 1,052,188,195 | I_kwDODunzps4-tx4j | 3,258 | Reload dataset that was already downloaded with `load_from_disk` from cloud storage | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,636,737,299,000 | 1,636,737,299,000 | null | MEMBER | null | null | null | `load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once.
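For example, reloading the same remote dataset twice today triggers two full downloads (an illustrative sketch only; the bucket URI and filesystem setup are hypothetical):
```python
from datasets import load_from_disk
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem(anon=False)  # hypothetical credentials/config
# each call pulls the data into a fresh temporary directory -- nothing is reused
ds_first = load_from_disk("s3://my-bucket/my-dataset", fs=s3)
ds_second = load_from_disk("s3://my-bucket/my-dataset", fs=s3)  # downloads everything again
```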
It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3258/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3257/comments | https://api.github.com/repos/huggingface/datasets/issues/3257/events | https://github.com/huggingface/datasets/issues/3257 | 1,052,118,365 | I_kwDODunzps4-tg1d | 3,257 | Use f-strings for string formatting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, I would be glad to help with this. Is there anyone else working on it?",
"Hi, I would be glad to work on this too.",
"#self-assign",
"Hi @Carlosbogo,\r\n\r\nwould you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has opened a PR that does that for all the other directories?",
"Oh I see. I will be glad to help with the `datasets` directory then."
] | 1,636,732,935,000 | 1,637,165,918,000 | 1,637,165,918,000 | CONTRIBUTOR | null | null | null | f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
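For illustration, the kind of conversion this asks for (a made-up example, not a specific line from the codebase):
```python
name, split = "squad", "train"
# older styles currently found in the codebase
old_format = "Loading {} ({} split)".format(name, split)
old_percent = "Loading %s (%s split)" % (name, split)
# preferred f-string equivalent
new_fstring = f"Loading {name} ({split} split)"
assert old_format == old_percent == new_fstring
```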
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3257/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3256/comments | https://api.github.com/repos/huggingface/datasets/issues/3256/events | https://github.com/huggingface/datasets/pull/3256 | 1,052,000,613 | PR_kwDODunzps4udTqg | 3,256 | asserts replaced by exception for text classification task with test. | {
"login": "manisnesan",
"id": 153142,
"node_id": "MDQ6VXNlcjE1MzE0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manisnesan",
"html_url": "https://github.com/manisnesan",
"followers_url": "https://api.github.com/users/manisnesan/followers",
"following_url": "https://api.github.com/users/manisnesan/following{/other_user}",
"gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions",
"organizations_url": "https://api.github.com/users/manisnesan/orgs",
"repos_url": "https://api.github.com/users/manisnesan/repos",
"events_url": "https://api.github.com/users/manisnesan/events{/privacy}",
"received_events_url": "https://api.github.com/users/manisnesan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !",
"Thanks for the feedback. "
] | 1,636,725,936,000 | 1,636,729,773,000 | 1,636,729,172,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3256",
"html_url": "https://github.com/huggingface/datasets/pull/3256",
"diff_url": "https://github.com/huggingface/datasets/pull/3256.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3256.patch",
"merged_at": 1636729172000
} | Following https://github.com/huggingface/datasets/issues/3171, I have replaced a single assert in text_classification.py and added a unit test that verifies an exception is raised instead.
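For illustration, the general shape of the change (attribute names and messages here are illustrative, not the exact diff):
```python
# before: a bare assert, which can be stripped out with `python -O` and gives a terse error
assert self.label_column in features, f"Column {self.label_column} is not present in features."

# after: an explicit exception with an informative message
if self.label_column not in features:
    raise ValueError(f"Column {self.label_column} is not present in features.")
```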
I would like to first understand the code contribution workflow, so I am keeping the change to a single file rather than making too many changes at once. Once this gets approved, I will look into the rest.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3256/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3255/comments | https://api.github.com/repos/huggingface/datasets/issues/3255/events | https://github.com/huggingface/datasets/issues/3255 | 1,051,783,129 | I_kwDODunzps4-sO_Z | 3,255 | SciELO dataset ConnectionError | {
"login": "WojciechKusa",
"id": 2575047,
"node_id": "MDQ6VXNlcjI1NzUwNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2575047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WojciechKusa",
"html_url": "https://github.com/WojciechKusa",
"followers_url": "https://api.github.com/users/WojciechKusa/followers",
"following_url": "https://api.github.com/users/WojciechKusa/following{/other_user}",
"gists_url": "https://api.github.com/users/WojciechKusa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WojciechKusa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WojciechKusa/subscriptions",
"organizations_url": "https://api.github.com/users/WojciechKusa/orgs",
"repos_url": "https://api.github.com/users/WojciechKusa/repos",
"events_url": "https://api.github.com/users/WojciechKusa/events{/privacy}",
"received_events_url": "https://api.github.com/users/WojciechKusa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,636,711,034,000 | 1,637,085,322,000 | 1,637,085,322,000 | NONE | null | null | null | ## Describe the bug
I get a `ConnectionError` when trying to load the SciELO dataset.
When I try the URL with `requests` I get:
```
>>> requests.head("https://ndownloader.figstatic.com/files/14019287")
<Response [302]>
```
As far as I understand, redirections are not supported for downloads in `datasets`:
https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45
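For reference, following the redirect chain manually should reveal the final endpoint (a quick diagnostic, not part of the dataset script):
```python
import requests

# follow redirects to see where the 302 actually points
response = requests.head("https://ndownloader.figstatic.com/files/14019287", allow_redirects=True)
print(response.status_code, response.url)  # final endpoint after the redirect
```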
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("scielo", "en-es")
```
## Expected results
Download SciELO dataset and load Dataset object
## Actual results
```
Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e...
Traceback (most recent call last):
File "scielo.py", line 3, in <module>
dataset = load_dataset("scielo", "en-es")
File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators
data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3255/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3254/comments | https://api.github.com/repos/huggingface/datasets/issues/3254/events | https://github.com/huggingface/datasets/pull/3254 | 1,051,351,172 | PR_kwDODunzps4ubPwR | 3,254 | Update xcopa dataset (fix checksum issues + add translated data) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)."
] | 1,636,663,893,000 | 1,636,713,058,000 | 1,636,713,057,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3254",
"html_url": "https://github.com/huggingface/datasets/pull/3254",
"diff_url": "https://github.com/huggingface/datasets/pull/3254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3254.patch",
"merged_at": 1636713057000
} | This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3254/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3253/comments | https://api.github.com/repos/huggingface/datasets/issues/3253/events | https://github.com/huggingface/datasets/issues/3253 | 1,051,308,972 | I_kwDODunzps4-qbOs | 3,253 | `GeneratorBasedBuilder` does not support `None` values | {
"login": "pavel-lexyr",
"id": 69010336,
"node_id": "MDQ6VXNlcjY5MDEwMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/69010336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavel-lexyr",
"html_url": "https://github.com/pavel-lexyr",
"followers_url": "https://api.github.com/users/pavel-lexyr/followers",
"following_url": "https://api.github.com/users/pavel-lexyr/following{/other_user}",
"gists_url": "https://api.github.com/users/pavel-lexyr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavel-lexyr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavel-lexyr/subscriptions",
"organizations_url": "https://api.github.com/users/pavel-lexyr/orgs",
"repos_url": "https://api.github.com/users/pavel-lexyr/repos",
"events_url": "https://api.github.com/users/pavel-lexyr/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavel-lexyr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\n\r\nI expect that PR to be merged soon."
] | 1,636,660,281,000 | 1,639,060,018,000 | 1,639,060,018,000 | NONE | null | null | null | ## Describe the bug
`GeneratorBasedBuilder` does not support `None` values.
## Steps to reproduce the bug
See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.
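A minimal sketch of the kind of example that triggers the error below (not the exact contents of the linked repository):
```python
import datasets

features = datasets.Features({"value": datasets.Value("float32")})
# encoding a None value raises: float() argument must be a string or a number, not 'NoneType'
features.encode_example({"value": None})
```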
## Expected results
The dataset is initialized with a `None` value in the `value` column.
## Actual results
```
Traceback (most recent call last):
File "main.py", line 3, in <module>
datasets.load_dataset("./bad-data")
File ".../datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File ".../datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File ".../datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File ".../datasets/builder.py", line 1103, in _prepare_split
example = self.info.features.encode_example(record)
File ".../datasets/features/features.py", line 1033, in encode_example
return encode_nested_example(self, example)
File ".../datasets/features/features.py", line 808, in encode_nested_example
return {
File ".../datasets/features/features.py", line 809, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File ".../datasets/features/features.py", line 855, in encode_nested_example
return schema.encode_example(obj)
File ".../datasets/features/features.py", line 299, in encode_example
return float(value)
TypeError: float() argument must be a string or a number, not 'NoneType'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3253/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3252/comments | https://api.github.com/repos/huggingface/datasets/issues/3252/events | https://github.com/huggingface/datasets/pull/3252 | 1,051,124,749 | PR_kwDODunzps4uagoy | 3,252 | Fix failing CER metric test in CI after update | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,646,236,000 | 1,636,726,004,000 | 1,636,726,003,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3252",
"html_url": "https://github.com/huggingface/datasets/pull/3252",
"diff_url": "https://github.com/huggingface/datasets/pull/3252.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3252.patch",
"merged_at": 1636726003000
} | Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3252/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3250/comments | https://api.github.com/repos/huggingface/datasets/issues/3250/events | https://github.com/huggingface/datasets/pull/3250 | 1,050,541,348 | PR_kwDODunzps4uYmkr | 3,250 | Add ETHICS dataset | {
"login": "ssss1029",
"id": 7088559,
"node_id": "MDQ6VXNlcjcwODg1NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssss1029",
"html_url": "https://github.com/ssss1029",
"followers_url": "https://api.github.com/users/ssss1029/followers",
"following_url": "https://api.github.com/users/ssss1029/following{/other_user}",
"gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions",
"organizations_url": "https://api.github.com/users/ssss1029/orgs",
"repos_url": "https://api.github.com/users/ssss1029/repos",
"events_url": "https://api.github.com/users/ssss1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssss1029/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @ssss1029. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 1,636,602,334,000 | 1,664,789,845,000 | 1,664,789,845,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3250",
"html_url": "https://github.com/huggingface/datasets/pull/3250",
"diff_url": "https://github.com/huggingface/datasets/pull/3250.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3250.patch",
"merged_at": null
} | This PR adds the ETHICS dataset, including all 5 sub-datasets.
From https://arxiv.org/abs/2008.02275 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3250/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3250/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3249/comments | https://api.github.com/repos/huggingface/datasets/issues/3249/events | https://github.com/huggingface/datasets/pull/3249 | 1,050,193,138 | PR_kwDODunzps4uXeea | 3,249 | Fix streaming for id_newspapers_2018 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,570,530,000 | 1,636,725,692,000 | 1,636,725,691,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3249",
"html_url": "https://github.com/huggingface/datasets/pull/3249",
"diff_url": "https://github.com/huggingface/datasets/pull/3249.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3249.patch",
"merged_at": 1636725691000
} | To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3249/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3248/comments | https://api.github.com/repos/huggingface/datasets/issues/3248/events | https://github.com/huggingface/datasets/pull/3248 | 1,050,171,082 | PR_kwDODunzps4uXZzU | 3,248 | Stream from Google Drive and other hosts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow",
"I'm fixing the remaining files based on TAR archives",
"THANKS A LOT"
] | 1,636,569,152,000 | 1,638,288,223,000 | 1,636,737,491,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3248",
"html_url": "https://github.com/huggingface/datasets/pull/3248",
"diff_url": "https://github.com/huggingface/datasets/pull/3248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3248.patch",
"merged_at": 1636737490000
} | Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting:
- the download URL must be updated to add the confirm token obtained by a HEAD request
- it requires using cookies to keep the connection alive
- the URL doesn't give any information about whether the file is compressed or not
Therefore I did two things:
- I added a step for URL and headers/cookies preparation in the StreamingDownloadManager (see the sketch after this list)
- I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures)
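To make the first of these two changes more concrete, here is a rough, hypothetical sketch — the cookie name and `confirm` parameter are assumptions about how Google Drive downloads typically work, not the actual `datasets` implementation:
```python
# Hypothetical sketch of the Google Drive URL preparation step (not the real implementation)
import requests

def prepare_google_drive_url(url: str):
    session = requests.Session()
    response = session.head(url, allow_redirects=True)
    # the confirm token is usually exposed through a "download_warning..." cookie
    token = next(
        (value for name, value in response.cookies.items() if name.startswith("download_warning")),
        None,
    )
    if token is not None:
        url = f"{url}&confirm={token}"
    # the cookies must be reused for the actual download to keep the connection alive
    return url, session.cookies
```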
This allows you to do fancy things like:
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob
# zip file containing a train.tsv file
url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh"
extracted = StreamingDownloadManager().download_and_extract(url)
for inner_file in xglob(xjoin(extracted, "*.tsv")):
    with xopen(inner_file) as f:
        # streaming starts here
        for line in f:
            print(line)
```
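And here is a rough sketch of what the magic-number-based compression inference can look like — an illustrative helper only, not the actual `datasets` code; the signatures listed are standard file magic numbers for a few common formats:
```python
# Illustrative only: infer the compression format from the first bytes of a stream
MAGIC_NUMBER_TO_COMPRESSION = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"BZh": "bz2",
    b"\xfd7zXZ\x00": "xz",
}

def infer_compression(file_obj):
    header = file_obj.read(8)
    file_obj.seek(0)
    for magic_number, compression in MAGIC_NUMBER_TO_COMPRESSION.items():
        if header.startswith(magic_number):
            return compression
    return None
```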
This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list:
```
amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail,
code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans,
code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14,
gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018,
igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa,
mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary,
poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo,
search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner,
twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018,
wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3
```
Some of them may not work, for example if the host doesn't support HTTP range requests.
Fix https://github.com/huggingface/datasets/issues/2742
Fix https://github.com/huggingface/datasets/issues/3188 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3248/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3247/comments | https://api.github.com/repos/huggingface/datasets/issues/3247/events | https://github.com/huggingface/datasets/issues/3247 | 1,049,699,088 | I_kwDODunzps4-kSMQ | 3,247 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError | {
"login": "maxzirps",
"id": 29249513,
"node_id": "MDQ6VXNlcjI5MjQ5NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/29249513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxzirps",
"html_url": "https://github.com/maxzirps",
"followers_url": "https://api.github.com/users/maxzirps/followers",
"following_url": "https://api.github.com/users/maxzirps/following{/other_user}",
"gists_url": "https://api.github.com/users/maxzirps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxzirps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxzirps/subscriptions",
"organizations_url": "https://api.github.com/users/maxzirps/orgs",
"repos_url": "https://api.github.com/users/maxzirps/repos",
"events_url": "https://api.github.com/users/maxzirps/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxzirps/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening an issue on Jira? Basically, PyArrow doesn't allow casts that change the order of the struct fields because they treat `pa.struct` as an ordered sequence. Reordering fields manually in Python is probably too slow, so I think this needs to be fixed by them to be usable on our side.",
"I agree I would expect PyArrow to be able to handle this, do you want to open the issue @mariosasko ?\r\nAlthough maybe it's possible to fix struct casting on our side without hurting performance too much, if it's simply a matter of reordering the arrays in the StructArray",
"Fixed in #3575, so I'm closing this issue."
] | 1,636,543,079,000 | 1,649,599,557,000 | 1,649,599,557,000 | NONE | null | null | null | ## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Splitting the big file into smaller ones and then loading them with the `load_dataset` method also did not work.
Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works.
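A rough sketch of that pandas-based workaround (assuming the JSON Lines file shown in the repro below; the exact `read_json` arguments are an assumption):
```python
import pandas as pd
from datasets import Dataset

# Read the newline-delimited JSON with pandas, then build the Dataset from it
df = pd.read_json("test.json", lines=True)
dataset = Dataset.from_pandas(df)
```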
## Steps to reproduce the bug
```python
load_dataset("json", data_files="test.json")
```
test.json ~25MB
```json
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
...
```
working.json ~160bytes
```json
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
```
## Expected results
It should load the dataset from the json file without error.
## Actual results
It raises Exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
```
Traceback (most recent call last):
File "/Users/m/workspace/xxx/project/main.py", line 60, in <module>
dataset = load_dataset("json", data_files="result.json")
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset
builder_instance.download_and_prepare(
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.14.0
- Platform: macOS-12.0.1-arm64-arm-64bit
- Python version: 3.9.7
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3247/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3246/comments | https://api.github.com/repos/huggingface/datasets/issues/3246/events | https://github.com/huggingface/datasets/pull/3246 | 1,049,662,746 | PR_kwDODunzps4uVvaW | 3,246 | [tiny] fix typo in stream docs | {
"login": "nollied",
"id": 26421036,
"node_id": "MDQ6VXNlcjI2NDIxMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nollied",
"html_url": "https://github.com/nollied",
"followers_url": "https://api.github.com/users/nollied/followers",
"following_url": "https://api.github.com/users/nollied/following{/other_user}",
"gists_url": "https://api.github.com/users/nollied/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nollied/subscriptions",
"organizations_url": "https://api.github.com/users/nollied/orgs",
"repos_url": "https://api.github.com/users/nollied/repos",
"events_url": "https://api.github.com/users/nollied/events{/privacy}",
"received_events_url": "https://api.github.com/users/nollied/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,540,802,000 | 1,636,542,639,000 | 1,636,542,639,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3246",
"html_url": "https://github.com/huggingface/datasets/pull/3246",
"diff_url": "https://github.com/huggingface/datasets/pull/3246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3246.patch",
"merged_at": 1636542639000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3246/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3245/comments | https://api.github.com/repos/huggingface/datasets/issues/3245/events | https://github.com/huggingface/datasets/pull/3245 | 1,048,726,062 | PR_kwDODunzps4uSqqq | 3,245 | Fix load_from_disk temporary directory | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,470,915,000 | 1,636,471,852,000 | 1,636,471,851,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3245",
"html_url": "https://github.com/huggingface/datasets/pull/3245",
"diff_url": "https://github.com/huggingface/datasets/pull/3245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3245.patch",
"merged_at": 1636471851000
} | `load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected.
In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, because it can't write the shuffled indices in a directory that doesn't exist anymore.
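As a minimal, library-agnostic illustration of the underlying pitfall (not the actual `datasets` code):
```python
import os
import tempfile

def make_dir():
    tmp = tempfile.TemporaryDirectory()
    return tmp.name  # only the path escapes; the TemporaryDirectory object gets garbage collected

path = make_dir()
print(os.path.exists(path))  # typically False: cleanup already ran when the object was collected
```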
In this PR I switch to using `get_temporary_cache_files_directory()` and I update the tests.
cc @mariosasko since you worked on `get_temporary_cache_files_directory()` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3245/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3245/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3244/comments | https://api.github.com/repos/huggingface/datasets/issues/3244/events | https://github.com/huggingface/datasets/pull/3244 | 1,048,675,741 | PR_kwDODunzps4uSgG5 | 3,244 | Fix filter method for batched=True | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,468,259,000 | 1,636,473,178,000 | 1,636,473,177,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3244",
"html_url": "https://github.com/huggingface/datasets/pull/3244",
"diff_url": "https://github.com/huggingface/datasets/pull/3244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3244.patch",
"merged_at": 1636473177000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3244/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3243/comments | https://api.github.com/repos/huggingface/datasets/issues/3243/events | https://github.com/huggingface/datasets/pull/3243 | 1,048,630,754 | PR_kwDODunzps4uSWtB | 3,243 | Remove redundant isort module placement | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,465,830,000 | 1,636,725,765,000 | 1,636,725,765,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3243",
"html_url": "https://github.com/huggingface/datasets/pull/3243",
"diff_url": "https://github.com/huggingface/datasets/pull/3243.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3243.patch",
"merged_at": 1636725765000
} | `isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3243/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3242/comments | https://api.github.com/repos/huggingface/datasets/issues/3242/events | https://github.com/huggingface/datasets/issues/3242 | 1,048,527,232 | I_kwDODunzps4-f0GA | 3,242 | Adding ANERcorp-CAMeLLab dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Adding ANERcorp dataset\r\n\r\n## Adding a Dataset\r\n- **Name:** *ANERcorp-CAMeLLab*\r\n- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset was copied over from user to user, modified slightly here and there, and split in many different configurations that made it hard to compare fairly across papers and systems.\r\n\r\nIn 2020, a group of researchers from CAMeL Lab (Habash, Alhafni and Oudah), and Mind Lab (Antoun and Baly) met with the creator of the corpus, Yassine Benajiba, to consult with him and collectively agree on an exact split, and accepted minor corrections from the original dataset. Bashar Alhafni from CAMeL Lab working with Nizar Habash implemented the decisions provided in this release.*\r\n\r\n- **Paper:** *(a) Benajiba, Yassine, Paolo Rosso, and José Miguel Benedí Ruiz. \"Anersys: An Arabic named entity recognition system based on maximum entropy.\" In International Conference on Intelligent Text Processing and Computational Linguistics, pp. 143-153. Springer, Berlin, Heidelberg, 2007.\r\n\r\n(b)Ossama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, and Nizar Habash. \"CAMeL Tools: An Open Source Python Toolkit, for Arabic Natural Language Processing.\" In Proceedings of the Conference on Language Resources and Evaluation (LREC 2020), Marseille, 2020.*\r\n- **Data:** *https://camel.abudhabi.nyu.edu/anercorp/*\r\n- **Motivation:** This is the standard dataset for evaluating NER performance in Arabic*\r\n\r\nInstructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)."
] | 1,636,459,444,000 | 1,636,461,675,000 | null | NONE | null | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3242/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3241/comments | https://api.github.com/repos/huggingface/datasets/issues/3241/events | https://github.com/huggingface/datasets/pull/3241 | 1,048,461,852 | PR_kwDODunzps4uRzHa | 3,241 | Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,455,255,000 | 1,644,853,560,000 | 1,636,465,768,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3241",
"html_url": "https://github.com/huggingface/datasets/pull/3241",
"diff_url": "https://github.com/huggingface/datasets/pull/3241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3241.patch",
"merged_at": 1636465768000
} | Fix #3237, fix #795. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3241/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3240/comments | https://api.github.com/repos/huggingface/datasets/issues/3240/events | https://github.com/huggingface/datasets/issues/3240 | 1,048,376,021 | I_kwDODunzps4-fPLV | 3,240 | Couldn't reach data file for disaster_response_messages | {
"login": "pandya6988",
"id": 81331791,
"node_id": "MDQ6VXNlcjgxMzMxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/81331791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandya6988",
"html_url": "https://github.com/pandya6988",
"followers_url": "https://api.github.com/users/pandya6988/followers",
"following_url": "https://api.github.com/users/pandya6988/following{/other_user}",
"gists_url": "https://api.github.com/users/pandya6988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pandya6988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pandya6988/subscriptions",
"organizations_url": "https://api.github.com/users/pandya6988/orgs",
"repos_url": "https://api.github.com/users/pandya6988/repos",
"events_url": "https://api.github.com/users/pandya6988/events{/privacy}",
"received_events_url": "https://api.github.com/users/pandya6988/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"It looks like the dataset isn't available anymore on appen.com\r\n\r\nThe CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ?"
] | 1,636,450,002,000 | 1,639,492,709,000 | 1,639,492,709,000 | NONE | null | null | null | ## Describe the bug
The following command gives a ConnectionError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
disaster = load_dataset('disaster_response_messages')
```
## Error
```
ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
```
## Expected results
It should load the dataset without an error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Google Colab
- Python version: 3.7
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3240/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3239/comments | https://api.github.com/repos/huggingface/datasets/issues/3239/events | https://github.com/huggingface/datasets/issues/3239 | 1,048,360,232 | I_kwDODunzps4-fLUo | 3,239 | Inconsistent performance of the "arabic_billion_words" dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,636,449,060,000 | 1,636,449,060,000 | null | NONE | null | null | null | ## Describe the bug
When downloaded from machine 1, the dataset is downloaded and parsed correctly.
When downloaded from machine 2 (which has a different cache directory),
the following script:
```python
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```
gives the following error:
```
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348M/348M [00:24<00:00, 14.0MB/s]
Traceback (most recent call last):
File ".../why_mismatch.py", line 3, in <module>
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]
```
Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical on both machines.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```
## Expected results
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████| 348M/348M [00:22<00:00, 15.8MB/s]
Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
Machine 1:
- `datasets` version: 1.15.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
Machine 2 (the bugged one)
- `datasets` version: 1.15.1
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3239/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3238/comments | https://api.github.com/repos/huggingface/datasets/issues/3238/events | https://github.com/huggingface/datasets/issues/3238 | 1,048,226,086 | I_kwDODunzps4-eqkm | 3,238 | Reuters21578 Couldn't reach | {
"login": "TingNLP",
"id": 54096137,
"node_id": "MDQ6VXNlcjU0MDk2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TingNLP",
"html_url": "https://github.com/TingNLP",
"followers_url": "https://api.github.com/users/TingNLP/followers",
"following_url": "https://api.github.com/users/TingNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions",
"organizations_url": "https://api.github.com/users/TingNLP/orgs",
"repos_url": "https://api.github.com/users/TingNLP/repos",
"events_url": "https://api.github.com/users/TingNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/TingNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi ! The URL works fine on my side today, could you try again ?",
"thank you @lhoestq \r\nit works"
] | 1,636,438,136,000 | 1,636,588,977,000 | 1,636,588,977,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *Reuters21578*
- **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz*
- **Data:** *https://huggingface.co/datasets/reuters21578*
```python
from datasets import load_dataset
dataset = load_dataset("reuters21578", 'ModLewis')
```
```
ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz
```
And I tried to request the link as follows:
```python
import requests
requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')
```
```
SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
```
This problem is similar to #575.
What should I do?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3238/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3237/comments | https://api.github.com/repos/huggingface/datasets/issues/3237/events | https://github.com/huggingface/datasets/issues/3237 | 1,048,165,525 | I_kwDODunzps4-ebyV | 3,237 | wikitext description wrong | {
"login": "hongyuanmei",
"id": 19693633,
"node_id": "MDQ6VXNlcjE5NjkzNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/19693633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongyuanmei",
"html_url": "https://github.com/hongyuanmei",
"followers_url": "https://api.github.com/users/hongyuanmei/followers",
"following_url": "https://api.github.com/users/hongyuanmei/following{/other_user}",
"gists_url": "https://api.github.com/users/hongyuanmei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongyuanmei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongyuanmei/subscriptions",
"organizations_url": "https://api.github.com/users/hongyuanmei/orgs",
"repos_url": "https://api.github.com/users/hongyuanmei/repos",
"events_url": "https://api.github.com/users/hongyuanmei/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongyuanmei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it.",
"Duplicate of:\r\n- #795"
] | 1,636,430,812,000 | 1,644,853,511,000 | 1,636,465,768,000 | NONE | null | null | null | ## Describe the bug
Descriptions of the wikitext datasets are wrong.
## Steps to reproduce the bug
Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50
## Expected results
The descriptions for raw-v1 and v1 should be switched. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3237/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3236/comments | https://api.github.com/repos/huggingface/datasets/issues/3236/events | https://github.com/huggingface/datasets/issues/3236 | 1,048,026,358 | I_kwDODunzps4-d5z2 | 3,236 | Loading of datasets changed in #3110 returns no examples | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, post-processed: Unknown size, total: 44.99 MiB) to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8...\r\nDataset qasper downloaded and prepared to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8. Subsequent calls will reuse this data.\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 888\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 281\r\n })\r\n})\r\n``` \r\n\r\nThis makes me suspect that the origin of the problem might be the cache: I didn't have this dataset in my cache, although I guess you already had it, before the code change introduced by #3110.\r\n\r\n@lhoestq might it be possible that the code change introduced by #3110 makes \"inaccessible\" all previously cached TAR-based datasets?\r\n- Before the caching system downloaded and extracted the tar dataset\r\n- Now it only downloads the tar dataset (no extraction is done)",
"I can't reproduce either in my environment (macos, python 3.7).\r\n\r\nIn your case it generates zero examples. This can only happen if the extraction of the TAR archive doesn't output the right filenames. Indeed if the `qasper` script can't find the right file to load, it's currently ignored and it returns zero examples. This case was not even considered when #3110 was developed since we considered the file names to be deterministic - and not depend on your environment.\r\n\r\nTherefore here is my hypothesis:\r\n- either the cache is corrupted somehow with an empty TAR archive\r\n- OR I suspect that the issue comes from python 3.8\r\n",
"I just tried again on python 3.8 and I was able to reproduce the issue. Let me work on a fix",
"Ok I found the issue. It's not related to python 3.8 in itself though. This issue happens because your local installation of `datasets` is outdated compared to the changes to datasets in #3110\r\n\r\nTo fix this you just have to pull the latest changes from `master` :)\r\n\r\nLet me know if that helps !\r\n\r\n--------------\r\n\r\nHere are more details about my investigation:\r\n\r\nIt's possible to reproduce this issue if you use `datasets<=1.15.1` or before b6469baa22c174b3906c631802a7016fedea6780 and if you load the dataset after revision b6469baa22c174b3906c631802a7016fedea6780. This is because `dl_manager.iter_archive` had issues at that time (and it was not used anywhere anyway).\r\n\r\nIn particular it was returning the absolute path to extracted files instead of the relative path of the file inside the archive. This was an issue because `dl_manager.iter_archive` isn't supposed to extract the TAR archive. Instead, it iterates over all the files inside the archive, without creating a directory with the extracted content.\r\n\r\nTherefore if you want to use the datasets on `master`, make sure that you have an up-to-date local installation of `datasets` as well, or you may face incompatibilities like this.",
"Thanks!\r\nBut what about code that is already using older version of datasets? \r\nThe reason I encountered this issue was that suddenly one of my repos with version 1.12.1 started getting 0 examples.\r\nI handled it by adding `revision` to `load_dataset`, but I guess it would still be an issue for other users who doesn't know this.",
"Hi, in 1.12.1 it uses the dataset scripts from that time, not the one on master.\r\n\r\nIt only uses the datasets from master if you installed `datasets` from source, or if the dataset isn't available in your local version (in this case it shows a warning and it loads from master).\r\n",
"OK, I understand the issue a bit better now.\r\nI see I wasn't on 1.12.1, but on 1.12.1.dev0 and since it is a dev version it uses master.\r\nSo users that use an old dev version must specify revision or else they'll encounter this problem.\r\n\r\nBTW, when I opened the issue I installed the latest master version with\r\n```\r\npip install git+git://github.com/huggingface/datasets@master#egg=datasets\r\n```\r\nand also used `download_mode=\"force_redownload\"`, and it still returned 0 examples.\r\nNow I deleted all of the cache and ran the code again, and it worked.\r\nI'm not sure what exactly happened here, but looks like it was due to a mix of an unofficial version and its cache.\r\n\r\nThanks again!"
] | 1,636,414,186,000 | 1,636,476,365,000 | 1,636,476,347,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Loading the datasets that were changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
})
```
## Steps to reproduce the bug
Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper")
# The problem only started with the commit of #3110
load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780")
```
## Expected results
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 888
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 281
})
})
```
These can be obtained by specifying the revision of the commit before https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d")
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.2.dev0 (master)
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3236/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3235/comments | https://api.github.com/repos/huggingface/datasets/issues/3235/events | https://github.com/huggingface/datasets/pull/3235 | 1,047,808,263 | PR_kwDODunzps4uPr9Z | 3,235 | Add options to use updated bleurt checkpoints | {
"login": "jaehlee",
"id": 11873078,
"node_id": "MDQ6VXNlcjExODczMDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/11873078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaehlee",
"html_url": "https://github.com/jaehlee",
"followers_url": "https://api.github.com/users/jaehlee/followers",
"following_url": "https://api.github.com/users/jaehlee/following{/other_user}",
"gists_url": "https://api.github.com/users/jaehlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaehlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaehlee/subscriptions",
"organizations_url": "https://api.github.com/users/jaehlee/orgs",
"repos_url": "https://api.github.com/users/jaehlee/repos",
"events_url": "https://api.github.com/users/jaehlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaehlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,397,634,000 | 1,636,725,928,000 | 1,636,725,928,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3235",
"html_url": "https://github.com/huggingface/datasets/pull/3235",
"diff_url": "https://github.com/huggingface/datasets/pull/3235.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3235.patch",
"merged_at": 1636725928000
} | Adds options to use the newer recommended checkpoint (as of 2021/10/8), bleurt-20, and its distilled versions.
The updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20
This change won't affect the default behavior of metrics/bleurt. It only adds the option to load newer checkpoints as
`datasets.load_metric('bleurt', 'bleurt-20')`
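As a minimal usage sketch (it assumes the `bleurt` package is installed; the example strings below are illustrative only):
```python
from datasets import load_metric

# Load the newer checkpoint by passing its name as the metric config
bleurt = load_metric("bleurt", "bleurt-20")
results = bleurt.compute(predictions=["hello there"], references=["hi there"])
print(results["scores"])
```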
`bleurt-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3235/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3234/comments | https://api.github.com/repos/huggingface/datasets/issues/3234/events | https://github.com/huggingface/datasets/pull/3234 | 1,047,634,236 | PR_kwDODunzps4uPHRk | 3,234 | Avoid PyArrow type optimization if it fails | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That's good to have a way to disable this easily :)\r\nI just find it a bit unfortunate that users would have to experience the error once and then do `DISABLE_PYARROW_TYPES_OPTIMIZATION=1`. Do you know if there's a way to simply fallback on disabling it automatically when it fails ?",
"@lhoestq Actually, I agree a fallback makes more sense. The current approach is not very practical indeed and would require a mention in the docs.\r\n",
"Replaced the env variable with a fallback!",
"Hmm if the fallback automatically happens without the user knowing it, then I don't think we really need to mention it. But if you really wanted to, I think the [Improve performance](https://huggingface.co/docs/datasets/cache.html#improve-performance) section would be a great place for it! ",
"Yea I think this could just end up in a note that says that `datasets` automatically picks the most optimized integer precision for your tokenized text data to save you disk space. Maybe later if we have a page on text processing we can add this note, but for now I agree it doesn't fit well into the doc.\r\n\r\nIn particular in the \"Improve performance\" section we mention what users can do to speed up their computations, while this behavior is just some internal feature that users don't have control over anyway."
] | 1,636,387,827,000 | 1,636,545,869,000 | 1,636,545,868,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3234",
"html_url": "https://github.com/huggingface/datasets/pull/3234",
"diff_url": "https://github.com/huggingface/datasets/pull/3234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3234.patch",
"merged_at": 1636545868000
} | Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization.
Fix #2206 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3234/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3233/comments | https://api.github.com/repos/huggingface/datasets/issues/3233/events | https://github.com/huggingface/datasets/pull/3233 | 1,047,474,931 | PR_kwDODunzps4uOl9- | 3,233 | Improve repository structure docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,379,495,000 | 1,636,452,138,000 | 1,636,452,137,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3233",
"html_url": "https://github.com/huggingface/datasets/pull/3233",
"diff_url": "https://github.com/huggingface/datasets/pull/3233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3233.patch",
"merged_at": 1636452137000
} | Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3233/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3232/comments | https://api.github.com/repos/huggingface/datasets/issues/3232/events | https://github.com/huggingface/datasets/issues/3232 | 1,047,361,573 | I_kwDODunzps4-bXgl | 3,232 | The Xsum datasets seems not able to download. | {
"login": "FYYFU",
"id": 37999885,
"node_id": "MDQ6VXNlcjM3OTk5ODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37999885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FYYFU",
"html_url": "https://github.com/FYYFU",
"followers_url": "https://api.github.com/users/FYYFU/followers",
"following_url": "https://api.github.com/users/FYYFU/following{/other_user}",
"gists_url": "https://api.github.com/users/FYYFU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FYYFU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FYYFU/subscriptions",
"organizations_url": "https://api.github.com/users/FYYFU/orgs",
"repos_url": "https://api.github.com/users/FYYFU/repos",
"events_url": "https://api.github.com/users/FYYFU/events{/privacy}",
"received_events_url": "https://api.github.com/users/FYYFU/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! On my side the URL is working fine, could you try again ?",
"> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>",
"I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.",
"> I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.\r\n\r\n:> ok. Thanks for your reply."
] | 1,636,372,734,000 | 1,636,470,436,000 | 1,636,470,436,000 | NONE | null | null | null | ## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It does not seem to be downloadable.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
```python
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3232/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3231/comments | https://api.github.com/repos/huggingface/datasets/issues/3231/events | https://github.com/huggingface/datasets/pull/3231 | 1,047,170,906 | PR_kwDODunzps4uNmWT | 3,231 | Group tests in multiprocessing workers by test file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,361,163,000 | 1,636,377,558,000 | 1,636,361,984,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3231",
"html_url": "https://github.com/huggingface/datasets/pull/3231",
"diff_url": "https://github.com/huggingface/datasets/pull/3231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3231.patch",
"merged_at": 1636361983000
} | By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker.
Therefore, the fixture `hf_token` will be called only once (and from the same worker).
Related to: #3200.
Fix #3219. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3230/comments | https://api.github.com/repos/huggingface/datasets/issues/3230/events | https://github.com/huggingface/datasets/pull/3230 | 1,047,135,583 | PR_kwDODunzps4uNfEd | 3,230 | Add full tagset to conll2003 README | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also added the missing `pretty_name` tag in the dataset card to fix the CI"
] | 1,636,358,764,000 | 1,636,454,918,000 | 1,636,454,458,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3230",
"html_url": "https://github.com/huggingface/datasets/pull/3230",
"diff_url": "https://github.com/huggingface/datasets/pull/3230.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3230.patch",
"merged_at": 1636454458000
} | Even though it is possible to manually get the tagset list with
```python
dset.features[field_name].feature.names
```
I think it is useful to have an overview of the tagset used on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean.
From a user-experience perspective, I would urge the full tagsets to always be available in the READMEs, but I understand that that would probably take a lot of work. Perhaps it can be automated?
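For reference, a minimal sketch of decoding the encoded tags back to their string labels (it assumes the standard `conll2003` loading script and its `ner_tags` feature):
```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
ner_labels = ds.features["ner_tags"].feature.names  # int -> label name list
print([ner_labels[i] for i in ds[0]["ner_tags"]])   # decode the first sample
```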
closes #3189 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3230/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3230/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3229/comments | https://api.github.com/repos/huggingface/datasets/issues/3229/events | https://github.com/huggingface/datasets/pull/3229 | 1,046,706,425 | PR_kwDODunzps4uMKsx | 3,229 | Fix URL in CITATION file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,279,475,000 | 1,636,279,486,000 | 1,636,279,485,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3229",
"html_url": "https://github.com/huggingface/datasets/pull/3229",
"diff_url": "https://github.com/huggingface/datasets/pull/3229.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3229.patch",
"merged_at": 1636279485000
} | Currently, the BibTeX citation parsed from the CITATION file has a wrong URL (it shows the repo URL instead of the proceedings paper URL):
```
@inproceedings{Lhoest_Datasets_A_Community_2021,
author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
month = {11},
pages = {175--184},
publisher = {Association for Computational Linguistics},
title = {{Datasets: A Community Library for Natural Language Processing}},
url = {https://github.com/huggingface/datasets},
year = {2021}
}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3229/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3228/comments | https://api.github.com/repos/huggingface/datasets/issues/3228/events | https://github.com/huggingface/datasets/pull/3228 | 1,046,702,143 | PR_kwDODunzps4uMJ58 | 3,228 | Add CITATION file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,278,019,000 | 1,636,278,707,000 | 1,636,278,706,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3228",
"html_url": "https://github.com/huggingface/datasets/pull/3228",
"diff_url": "https://github.com/huggingface/datasets/pull/3228.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3228.patch",
"merged_at": 1636278706000
} | Add CITATION file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3228/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3228/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3227/comments | https://api.github.com/repos/huggingface/datasets/issues/3227/events | https://github.com/huggingface/datasets/issues/3227 | 1,046,667,845 | I_kwDODunzps4-YuJF | 3,227 | Error in `Json(datasets.ArrowBasedBuilder)` class | {
"login": "JunShern",
"id": 7796965,
"node_id": "MDQ6VXNlcjc3OTY5NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7796965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunShern",
"html_url": "https://github.com/JunShern",
"followers_url": "https://api.github.com/users/JunShern/followers",
"following_url": "https://api.github.com/users/JunShern/following{/other_user}",
"gists_url": "https://api.github.com/users/JunShern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunShern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunShern/subscriptions",
"organizations_url": "https://api.github.com/users/JunShern/orgs",
"repos_url": "https://api.github.com/users/JunShern/repos",
"events_url": "https://api.github.com/users/JunShern/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunShern/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I have additionally identified the source of the error, being that [this condition](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L124-L126) in the file\r\n`python3.8/site-packages/datasets/packaged_modules/json/json.py` is not being entered correctly:\r\n```python\r\n if (\r\n isinstance(e, pa.ArrowInvalid)\r\n and \"straddling\" not in str(e)\r\n or block_size > len(batch)\r\n ):\r\n```\r\n\r\nFrom what I can tell, in my case the block_size simply needs to be increased, but the error message does not contain \"straddling\" so the condition does trigger correctly and we fail to reach [the line to increase block_size](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L135).\r\n\r\nChanging the condition above to simply\r\n```python\r\n if (\r\n block_size > len(batch)\r\n ):\r\n```\r\n\r\nFixes the error for me. I'm happy to create a PR containing this fix if the developers deem the other conditions unnecessary.",
"Hi ! I think the issue comes from the fact that your JSON file is not a valid JSON Lines file.\r\nEach example should be on one single line.\r\n\r\nCan you try fixing the format to have one line per example and try again ?",
":open_mouth: you're right, that did it! I just put everything on a single line (my file only has a single example) and that fixed the error. Thank you so much!"
] | 1,636,264,232,000 | 1,636,484,955,000 | 1,636,484,955,000 | NONE | null | null | null | ## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
```
Please download [this file](https://github.com/huggingface/datasets/files/7491797/mydata.txt) as `mydata.json`. (The error does not occur in JSON files with shorter text, but it is reproducible when the text is long as in the file I provide)
:exclamation: :exclamation: GitHub doesn't allow me to upload JSON so this file is a TXT, and you should rename it to `.json`!
`test.py` simply contains:
```python
from datasets import load_dataset
my_dataset = load_dataset("testdata")
```
To reproduce the error, simply run
```
python test.py
```
## Expected results
The data should load correctly without error.
## Actual results
The dataset builder fails with:
```
Using custom data configuration testdata-d490389b8ab4fd82
Downloading and preparing dataset json/testdata to /home/junshern.chan/.cache/huggingface/datasets/json/testdata-d490389b8ab4fd82/0.0.0/3333a8af0db9764dfcff43a42ff26228f0f2e267f0d8a0a294452d188beadb34...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2264.74it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 447.01it/s]
Failed to read file '/home/junshern.chan/hf-json-bug/testdata/mydata.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 0
Traceback (most recent call last):
File "test.py", line 28, in <module>
my_dataset = load_dataset("testdata")
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 1156, in _prepare_split
for key, table in utils.tqdm(
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/tqdm/std.py", line 1168, in __iter__
for obj in iterable:
File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables
raise ValueError(
ValueError: Not able to read records in the JSON file at /home/junshern.chan/hf-json-bug/testdata/mydata.json. You should probably indicate the field of the JSON file containing your records. This JSON file contain the following fields: ['text']. Select the correct one and provide it as `field='XXX'` to the dataset loading method.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.0
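Based on the resolution in the comments above, the builder expects JSON Lines (one JSON object per line); a minimal sketch of writing the file in that format (the field name and path follow the example in this report):
```python
import json

examples = [{"text": "a very long document ..."}, {"text": "another document ..."}]
with open("testdata/mydata.json", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")  # one JSON object per line
```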
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3227/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3226/comments | https://api.github.com/repos/huggingface/datasets/issues/3226/events | https://github.com/huggingface/datasets/pull/3226 | 1,046,584,518 | PR_kwDODunzps4uL0ma | 3,226 | Fix paper BibTeX citation with proceedings reference | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,228,379,000 | 1,636,268,728,000 | 1,636,268,727,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3226",
"html_url": "https://github.com/huggingface/datasets/pull/3226",
"diff_url": "https://github.com/huggingface/datasets/pull/3226.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3226.patch",
"merged_at": 1636268727000
} | Fix paper BibTeX citation with proceedings reference. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3226/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3225/comments | https://api.github.com/repos/huggingface/datasets/issues/3225/events | https://github.com/huggingface/datasets/pull/3225 | 1,046,530,493 | PR_kwDODunzps4uLrB3 | 3,225 | Update tatoeba to v2021-07-22 | {
"login": "KoichiYasuoka",
"id": 15098598,
"node_id": "MDQ6VXNlcjE1MDk4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/15098598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KoichiYasuoka",
"html_url": "https://github.com/KoichiYasuoka",
"followers_url": "https://api.github.com/users/KoichiYasuoka/followers",
"following_url": "https://api.github.com/users/KoichiYasuoka/following{/other_user}",
"gists_url": "https://api.github.com/users/KoichiYasuoka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KoichiYasuoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KoichiYasuoka/subscriptions",
"organizations_url": "https://api.github.com/users/KoichiYasuoka/orgs",
"repos_url": "https://api.github.com/users/KoichiYasuoka/repos",
"events_url": "https://api.github.com/users/KoichiYasuoka/events{/privacy}",
"received_events_url": "https://api.github.com/users/KoichiYasuoka/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"How about this? @lhoestq @abhishekkrthakur ",
"Hi ! I think it would be nice if people could still be able to load the old version.\r\nMaybe this can be a parameter ? For example to load the old version they could do\r\n```python\r\nload_dataset(\"tatoeba\", lang1=\"en\", lang2=\"mr\", date=\"v2020-11-09\")\r\n```\r\n\r\nIf it sounds good to you, we can add this parameter to the TatoebaConfig:\r\n```python\r\nclass TatoebaConfig(datasets.BuilderConfig):\r\n def __init__(self, *args, lang1=None, lang2=None, date=\"v2021-07-22\", **kwargs):\r\n self.date = date\r\n```\r\nand then pass the date to the URL\r\n```python\r\n_BASE_URL = \"https://object.pouta.csc.fi/OPUS-Tatoeba/{}/moses/{}-{}.txt.zip\"\r\n```\r\n```python\r\n def _base_url(lang1, lang2, date):\r\n return _BASE_URL.format(date, lang1, lang2)\r\n```\r\n\r\nWhat do you think ?",
"`_DATE = \"v\" + \"-\".join(s.zfill(2) for s in _VERSION.split(\".\"))` seems rather tricky but works well. How about this? @lhoestq \r\n",
"The CI is only failing because of the missing sections in the dataset card, and because of an issue with the CER metric that is unrelated to this PR"
] | 1,636,211,671,000 | 1,636,715,593,000 | 1,636,715,593,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3225",
"html_url": "https://github.com/huggingface/datasets/pull/3225",
"diff_url": "https://github.com/huggingface/datasets/pull/3225.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3225.patch",
"merged_at": 1636715593000
} | Tatoeba's latest version is v2021-07-22 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3225/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3224/comments | https://api.github.com/repos/huggingface/datasets/issues/3224/events | https://github.com/huggingface/datasets/pull/3224 | 1,046,495,831 | PR_kwDODunzps4uLk2q | 3,224 | User-pickling with dynamic sub-classing | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq Feel free to have a look. The implementation is slightly different from what you suggested. I have opted to overwrite `save` instead of meddling with `save_global`. `save_global` is called very late down in dill/pickle so it is hard to control for what is happening there. I might be wrong. Pickling is more complex than I thought! \r\n\r\nThe linked issue (`map` with spaCy) also works now!\r\n\r\n```python\r\nimport pickle\r\nimport spacy\r\nfrom spacy import Language\r\nfrom datasets import load_dataset\r\nfrom datasets.utils.py_utils import dumps, pklregister\r\n\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp: Language):\r\n pickler.save(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"large/file.txt\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n ds = ds[\"train\"].map(tokenize)\r\n\r\n # Sanity check: load NLP from pickle created with our own `dumps`\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n\r\n assert isinstance(nlp2, type(nlp))\r\n assert dumps(nlp) == dumps(nlp2)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nIf this all looks good to you, I'll start writing on some documentation and examples.\r\n",
"One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n```python\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, obj):\r\n def create_language(config, bytes_data):\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp = lang_cls.from_config(config)\r\n return nlp.from_bytes(bytes_data)\r\n\r\n args = (obj.config, obj.to_bytes())\r\n pickler.save_reduce(create_language, args, obj=obj)\r\n```\r\nso IMO we are missing a test with `pickler.save_reduce`. ",
"> One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n> \r\n> ```python\r\n> @pklregister(Language, allow_subclasses=True)\r\n> def hash_spacy_language(pickler, obj):\r\n> def create_language(config, bytes_data):\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp = lang_cls.from_config(config)\r\n> return nlp.from_bytes(bytes_data)\r\n> \r\n> args = (obj.config, obj.to_bytes())\r\n> pickler.save_reduce(create_language, args, obj=obj)\r\n> ```\r\n> \r\n> so IMO we are missing a test with `pickler.save_reduce`.\r\n\r\nSure that seems a good idea, but I do not quite understand what `save_reduce` does. Could you give some more info about what reduce functions do and how they differ from regular `save` and `save_global`? I've read about it but the docs nor the built-in `pickle` code seem really helpful.",
"I'm no pickle expect, but here is my understanding. I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n\r\nFor example your sanity check could be simplified from\r\n```python\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n```\r\nto\r\nEDIT: <s>pickle.loads(pickle.dumps(nlp))</s>\r\n```python\r\n nlp2 = loads(dumps(nlp)) # using our custom pickler\r\n```\r\n\r\nThough note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.",
"> I'm no pickle expect, but here is my understanding. I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n> \r\n> For example your sanity check could be simplified from\r\n> \r\n> ```python\r\n> config = nlp.config\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp2 = lang_cls.from_config(config)\r\n> nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```python\r\n> nlp2 = pickle.loads(pickle.dumps(nlp))\r\n> ```\r\n> \r\n> Though note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.\r\n\r\nYes, the sanity check can be simplified like that _if_ we use `pickle.dumps` - but that would not test our own `dumps` functionality and would do a naive dump instead of using `to_bytes`. It won't work if we use our own `dumps`, exactly because of the reason that we want custom pickling and being able to call `to_bytes`. To reconstruct the object from the pickled bytes from `to_bytes` we need `from_bytes`. The result of pickle/dill loads will therefore always be a `bytes` object and not a `Language` object.\r\n\r\nBut `save_reduce` is called when saving, right? Not when loading, AFAICT. I am just not sure what exactly it is saving. It is _potentially_ called [at the end of `save`](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L603) but only if we haven't returned by then. I just can't figure out what that base case is.",
"I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with `save` ?",
"@BramVanroy \r\nAs I understand `save_reduce` is very similar to `copyreg.pickle`, so I'd suggest you to check the following links:\r\n* https://docs.python.org/3/library/copyreg.html#copyreg.pickle\r\n* https://docs.python.org/3/library/pickle.html#object.__reduce__\r\n\r\n\r\n@lhoestq \r\n> I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with save ?\r\n\r\nI agree. \r\n\r\n`save_reduce` is very similar to `copyreg.pickle` and `object.__reduce__`, which are part of public API (and `save` isn't), so I expect more advanced users to know how to write their own reduction functions. But, as you say, `pklregister` should also work with `save` (even though I think `save` is a bit lower-level, and harder to understand than `save_reduce`).\r\n\r\nAll our examples in `py_utils` that use `pklregister` also use `save_reduce` in the last step, so my reduction for SpaCy is meant to be added there, and not to be written by users (because SpaCy is very popular, so the official support by us makes sense :)).\r\n\r\nAnd in the tests, let's ignore the reconstruction part of pickle/dill, because it's not important for us, and focus on the generated dumps. What do you think?",
"@mariosasko What exactly do you mean with \"isn't part of the public API\"? It is [a public method](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L535) in base pickle, just like `dump` is but maybe you mean something else.",
"@BramVanroy Oh sorry, it's public (not prefixed with `\"_\"`) but it's not documented in the docs. `save_reduce` is also not in the docs, but its signature/functionality is similar to `copyreg.pickle` and I see it more often being used in the projects on GH, so it's seems \"more public\" to me. ",
"Unfortunately I feel that pickle in general is under-documented. 😄 \r\n\r\nFor the documentation, I can add a brief example, maybe under \"How-to Guides\"? The only thing that isn't immediately obvious to me is how I can add that doc page to the TOC?",
"Yes great idea ! To add that doc page to the TOC, you just have to add it to the index.rst file in the \"How-to guides\" TOC section",
"@mariosasko @lhoestq Feel free to make any edits or suggestions in the text!",
"Hi @mariosasko. I wish you'd told me sooner, as I spent quite some time writing on this.\r\n\r\nI'm also not sure whether it is too advanced to have in the documentation. The spaCy use-case seems potentially frequent. Or do you wish to add that case to the defaults, and whenever new issues come up that seem like frequent/obvious cases, add those internally as well?",
"Documenting the internal `pklregister` is overkill IMO (and it can be kept in docstrings), but we can document something higher level like `register_hash_func` once it's implemented.\r\n\r\nSo we keep the nice documentation you've written (thank you!), except we can rename it to \"Advanced caching\" and show an API that is similar to\r\n```python\r\n>>> @register_hash_func(Language, allow_subclasses=True)\r\n>>> def hash_spacy_language(nlp: Language):\r\n>>> return (nlp.to_bytes(),)\r\n```\r\nThis way we keep the documentation centered around the public API rather than the internals that may evolve/be too complicated to fit only one section.\r\n\r\n> Or do you wish to add that case to the defaults, and whenever new issues come up that seem like frequent/obvious cases, add those internally as well?\r\n\r\nLet's add it to the defaults since it's a frequent use-case. And also allow users to control the hashing using the API mentioned above if they face other non-trivially-hashable objects",
"Sure, I can have a go at implementing spaCy as a built-in. Should it be included in the tests? (Therefore adding spaCy to the tests requirements.)\r\n\r\nNext, from your example, it seems that the return value of `register_hash_func` will be used in pickler.save automatically (calling pklregister a bit deeper). Any reason why it returns a tuple? I can work on this as well, if needed.",
"> Sure, I can have a go at implementing spaCy as a built-in. Should it be included in the tests? (Therefore adding spaCy to the tests requirements.)\r\n\r\nThat would be perfect !\r\n\r\n> Next, from your example, it seems that the return value of register_hash_func will be used in pickler.save automatically (calling pklregister a bit deeper). \r\n\r\nYes I think so. For example register_hash_func can call pklregister with the user's function, but wrapped to use pickler.save.\r\n\r\n> Any reason why it returns a tuple? I can work on this as well, if needed.\r\n\r\nIt can either return an arbitrary object or a tuple. I like it a bit better if it's a tuple, so users understand more easily how to make the function take into account more than one item for the hash. It's also consistent with the streamlit caching functions, that also require a tuple. No strong opinion on this though\r\n\r\nLet me know if I can help with anything",
"@lhoestq I do not have the time anymore to work on this. Can someone else pick this up?",
"Hi ! Sure someone else can continue this PR (either someone from HF, or other contributors can fork the PR).\r\nI think I can work on this next week or the week after, but if anyone wants to work on this earlier feel free to comment here :)"
] | 1,636,200,504,000 | 1,657,120,788,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3224",
"html_url": "https://github.com/huggingface/datasets/pull/3224",
"diff_url": "https://github.com/huggingface/datasets/pull/3224.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3224.patch",
"merged_at": null
} | This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this.
In this PR, the behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they have objects that are not easily picklable with default methods. When one registers a custom function to a type, an object of that type will be pickled with the given function by `Pickler`, which looks up the type in its `dispatch` table. The downside of this method, and of `pickle` in general, is that it is limited to direct type-matching and does not allow sub-classes. In many default cases, that is not an issue. But when you are using external libraries where classes (e.g. parsers, models) are sub-classed, this is not ideal.
```python
from datasets.fingerprint import Hasher
from datasets.utils.py_utils import pklregister
class BaseParser:
pass
class EnglishParser(BaseParser):
pass
@pklregister(BaseParser)
def custom_pkl_func(pickler, obj):
print(f"Called the custom pickle function for type {type(obj)}!")
# do something with the obj and ultimately save with the pickler
base = BaseParser()
en = EnglishParser()
# Hasher.hash uses the Pickler behind the scenes
# `custom_pkl_func` called for base
Hasher.hash(base)
# `custom_pkl_func` not called for en :-(
Hasher.hash(en)
```
In the example above, we'd want the sub-class `EnglishParser` to be handled in the same way as its super-class `BaseParser`. This PR solves that by allowing for a keyword argument `allow_subclasses` in `pklregister` (default: `False`).
```python
@pklregister(BaseParser, allow_subclasses=True)
```
When this option is enabled, we not only save the function in `Pickler.dispatch` but also save it in a custom table `Pickler.subclass_dispatch` **which allows us to dynamically add sub-classes of that class to the real dispatch table**. Then, if we want to pickle an object `obj` with `Pickler.dump()` (which ultimately will call `Pickler.save()`), we _first_ check whether any of the object's super-classes exist in `Pickler.subclass_dispatch` and get the related custom pickle function. If we find one, we add the type of `obj` alongside the function to `Pickler.dispatch`. All of this happens at the start of the call to `Pickler.save()`. _Only then_ dill.Pickler's `save` will be called, which in turn will call `pickle._Pickler.save`, which handles everything. Here, the `Pickler.dispatch` table will be used to look up custom pickler functions - and it now also includes the function for `obj`, which was copied from its super-class, which we added at the very start of our custom `Pickler.save()`.
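A small sketch of the intended behavior, reusing the classes from the first snippet above (the payload passed to `pickler.save` is purely illustrative):
```python
from datasets.fingerprint import Hasher
from datasets.utils.py_utils import pklregister

class BaseParser:
    pass

class EnglishParser(BaseParser):
    pass

# With allow_subclasses=True, sub-classes of BaseParser use the same function
@pklregister(BaseParser, allow_subclasses=True)
def custom_pkl_func(pickler, obj):
    pickler.save(type(obj).__name__)  # illustrative payload for the hash

Hasher.hash(BaseParser())     # custom_pkl_func is called
Hasher.hash(EnglishParser())  # custom_pkl_func is now called here as well
```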
For edge cases and, especially, for testing, a contextmanager class `TempPickleRegistry` is included that resets the pickle registry on exit to its previous state.
```python
with TempPickleRegistry():
@pklregister(MyObjClass)
def pickle_registry_test_false(pickler, obj):
pickler.save(obj.fancy_method())
some_obj = MyObjClass()
dumps(some_obj)
# `MyObjClass` is in Pickler.dispatch
# ... `MyObjClass` is _not_ in Pickler.dispatch anymore
```
closes https://github.com/huggingface/datasets/issues/3178
To Do
====
- [x] Write tests
- [ ] Write documentation/examples? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3224/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3223/comments | https://api.github.com/repos/huggingface/datasets/issues/3223/events | https://github.com/huggingface/datasets/pull/3223 | 1,046,445,507 | PR_kwDODunzps4uLb1E | 3,223 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,180,912,000 | 1,636,182,398,000 | 1,636,182,398,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3223",
"html_url": "https://github.com/huggingface/datasets/pull/3223",
"diff_url": "https://github.com/huggingface/datasets/pull/3223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3223.patch",
"merged_at": 1636182398000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3223/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3222/comments | https://api.github.com/repos/huggingface/datasets/issues/3222/events | https://github.com/huggingface/datasets/pull/3222 | 1,046,299,725 | PR_kwDODunzps4uK_uG | 3,222 | Add docs for audio processing | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Nice ! love it this way. I guess you can set this PR to \"ready for review\" ?",
"I guess we can merge this one now :)"
] | 1,636,153,679,000 | 1,637,771,528,000 | 1,637,768,152,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3222",
"html_url": "https://github.com/huggingface/datasets/pull/3222",
"diff_url": "https://github.com/huggingface/datasets/pull/3222.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3222.patch",
"merged_at": 1637768152000
} | This PR adds documentation for the `Audio` feature. It describes:
- The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them.
- Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rate.
- Resampling with `map`.
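As a quick illustration of the `cast_column` flow described in the bullets above (a minimal sketch; the dataset name, config, and 16 kHz target rate are placeholders, not something this PR prescribes):
```python
from datasets import load_dataset, Audio

# Any dataset with an "audio" column works the same way; "common_voice"/"tr" are only examples.
ds = load_dataset("common_voice", "tr", split="train")

# Cast the column so that decoding resamples on the fly to the desired rate.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]["audio"]  # decoded and resampled at access time
print(sample["sampling_rate"])  # 16000
```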
Preview [here](https://52969-250213286-gh.circle-artifacts.com/0/docs/_build/html/audio_process.html), let me know if I'm missing anything! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3222/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3221/comments | https://api.github.com/repos/huggingface/datasets/issues/3221/events | https://github.com/huggingface/datasets/pull/3221 | 1,045,890,512 | PR_kwDODunzps4uJp4Z | 3,221 | Resolve data_files by split name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Really cool!\r\nWhen splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?",
"> When splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?\r\n\r\nBoth are fine :) As soon as it has \"valid\" in it",
"Merging for now, if you have comments about the documentation we can address them in subsequent PRs :)",
"Thanks for the comments @stevhliu :) I just opened https://github.com/huggingface/datasets/pull/3233 to take them into account"
] | 1,636,121,255,000 | 1,636,379,540,000 | 1,636,134,598,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3221",
"html_url": "https://github.com/huggingface/datasets/pull/3221",
"diff_url": "https://github.com/huggingface/datasets/pull/3221.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3221.patch",
"merged_at": 1636134597000
} | As discussed in https://github.com/huggingface/datasets/issues/3027, we should automatically infer which file is supposed to go to which split, based on filenames.
I added support for different kinds of patterns, for both dataset repositories and local directories:
```
Input structure:
my_dataset_repository/
├── README.md
└── dataset.csv
Output patterns:
{"train": ["*"]}
```
```
Input structure:
my_dataset_repository/
├── README.md
├── train.csv
└── test.csv
my_dataset_repository/
├── README.md
└── data/
├── train.csv
└── test.csv
my_dataset_repository/
├── README.md
├── train_0.csv
├── train_1.csv
├── train_2.csv
├── train_3.csv
├── test_0.csv
└── test_1.csv
Output patterns:
{"train": ["*train*"], "test": ["*test*"]}
```
```
Input structure:
my_dataset_repository/
├── README.md
└── data/
├── train/
│ ├── shard_0.csv
│ ├── shard_1.csv
│ ├── shard_2.csv
│ └── shard_3.csv
└── test/
├── shard_0.csv
└── shard_1.csv
Output patterns:
{"train": ["*train*/*", "*train*/**/*"], "test": ["*test*/*", "*test*/**/*"]}
```
and also this pattern, which allows custom split names and is the structure used by #3098 for `push_to_hub` (cc @LysandreJik):
```
Input structure:
my_dataset_repository/
├── README.md
└── data/
├── train-00000-of-00003.csv
├── train-00001-of-00003.csv
├── train-00002-of-00003.csv
├── test-00000-of-00001.csv
├── random-00000-of-00003.csv
├── random-00001-of-00003.csv
└── random-00002-of-00003.csv
Output patterns:
{
"train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
"test": ["data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
"random": ["data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
}
```
You can check the documentation about structuring your repository [here](https://52640-250213286-gh.circle-artifacts.com/0/docs/_build/html/repository_structure.html). cc @stevhliu
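As a quick sketch of how this resolution surfaces to users (the repository name below is a placeholder for a repo laid out like the examples above; no loading script is needed):
```python
from datasets import load_dataset

# Splits are inferred from the file names/directories, e.g. "*train*" -> train, "*test*" -> test.
dset = load_dataset("username/my_dataset_repository")
print(dset)  # DatasetDict with "train" and "test" splits
```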
Fix https://github.com/huggingface/datasets/issues/3027
Fix https://github.com/huggingface/datasets/issues/3212
In the future we can also add support for dataset configurations. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3221/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3220/comments | https://api.github.com/repos/huggingface/datasets/issues/3220/events | https://github.com/huggingface/datasets/issues/3220 | 1,045,549,029 | I_kwDODunzps4-Uc_l | 3,220 | Add documentation about dataset viewer feature | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,636,099,879,000 | 1,636,099,879,000 | null | MEMBER | null | null | null | Add more details to the docs about the dataset viewer feature on the Hub.
CC: @julien-c
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3220/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3219/comments | https://api.github.com/repos/huggingface/datasets/issues/3219/events | https://github.com/huggingface/datasets/issues/3219 | 1,045,095,000 | I_kwDODunzps4-SuJY | 3,219 | Eventual Invalid Token Error at setup of private datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,636,051,845,000 | 1,636,377,786,000 | 1,636,361,983,000 | MEMBER | null | null | null | ## Describe the bug
From time to time, Invalid Token errors appear with private datasets:
- https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534
```
____________ ERROR at setup of test_load_streaming_private_dataset _____________
ValueError: Invalid token passed!
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I...
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
- https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763
```
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908>
hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj'
zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip')
@pytest.fixture(scope="session")
def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path):
repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3))
hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True)
repo_id = f"{USER}/{repo_name}"
hf_api.upload_file(
token=hf_token,
path_or_fileobj=str(zip_csv_path),
path_in_repo="data.zip",
repo_id=repo_id,
> repo_type="dataset",
)
tests/hub_fixtures.py:68:
...
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3219/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3218/comments | https://api.github.com/repos/huggingface/datasets/issues/3218/events | https://github.com/huggingface/datasets/pull/3218 | 1,045,032,313 | PR_kwDODunzps4uG2UA | 3,218 | Fix code quality in riddle_sense dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,047,800,000 | 1,636,048,203,000 | 1,636,048,202,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3218",
"html_url": "https://github.com/huggingface/datasets/pull/3218",
"diff_url": "https://github.com/huggingface/datasets/pull/3218.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3218.patch",
"merged_at": 1636048202000
} | Fix trailing whitespace.
Fix #3217. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3218/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3217/comments | https://api.github.com/repos/huggingface/datasets/issues/3217/events | https://github.com/huggingface/datasets/issues/3217 | 1,045,029,710 | I_kwDODunzps4-SeNO | 3,217 | Fix code quality bug in riddle_sense dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n"
] | 1,636,047,632,000 | 1,636,048,202,000 | 1,636,048,202,000 | MEMBER | null | null | null | ## Describe the bug
```
datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3217/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3216/comments | https://api.github.com/repos/huggingface/datasets/issues/3216/events | https://github.com/huggingface/datasets/pull/3216 | 1,045,027,733 | PR_kwDODunzps4uG1YS | 3,216 | Pin version exclusion for tensorflow incompatible with keras | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,047,486,000 | 1,636,109,858,000 | 1,636,109,857,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3216",
"html_url": "https://github.com/huggingface/datasets/pull/3216",
"diff_url": "https://github.com/huggingface/datasets/pull/3216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3216.patch",
"merged_at": 1636109857000
} | Once `tensorflow` version 2.6.2 is released:
- https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb
- https://pypi.org/project/tensorflow/2.6.2/
with the patch:
- tensorflow/tensorflow#52927
we can remove the temporary fix we introduced in:
- #3208
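For reference, the kind of version-exclusion pin this PR is about typically looks like the following in `setup.py` (a sketch only; the exact excluded versions are illustrative, not necessarily the ones committed here):
```python
# Illustrative requirement specifier: keep tensorflow, but skip the releases
# whose keras requirement resolves to an incompatible keras version.
TESTS_REQUIRE = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1",
]
```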
Fix #3209. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3216/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3215/comments | https://api.github.com/repos/huggingface/datasets/issues/3215/events | https://github.com/huggingface/datasets/pull/3215 | 1,045,011,207 | PR_kwDODunzps4uGx4o | 3,215 | Small updates to to_tf_dataset documentation | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@stevhliu Accepted both suggestions, thanks for the review!"
] | 1,636,046,521,000 | 1,636,052,138,000 | 1,636,052,137,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3215",
"html_url": "https://github.com/huggingface/datasets/pull/3215",
"diff_url": "https://github.com/huggingface/datasets/pull/3215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3215.patch",
"merged_at": 1636052137000
} | I added a little more description about `to_tf_dataset`, compared to just setting the format. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3215/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3214/comments | https://api.github.com/repos/huggingface/datasets/issues/3214/events | https://github.com/huggingface/datasets/issues/3214 | 1,044,924,050 | I_kwDODunzps4-SEaS | 3,214 | Add ACAV100M Dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,636,041,598,000 | 1,638,964,830,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *ACAV100M*
- **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.*
- **Paper:** *https://arxiv.org/abs/2101.10803*
- **Data:** *https://github.com/sangho-vision/acav100m*
- **Motivation:** *The largest dataset (to date) for audio-visual learning.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3214/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3214/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3213/comments | https://api.github.com/repos/huggingface/datasets/issues/3213/events | https://github.com/huggingface/datasets/pull/3213 | 1,044,745,313 | PR_kwDODunzps4uF6W9 | 3,213 | Fix tuple_ie download url | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,031,347,000 | 1,636,121,766,000 | 1,636,121,765,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3213",
"html_url": "https://github.com/huggingface/datasets/pull/3213",
"diff_url": "https://github.com/huggingface/datasets/pull/3213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3213.patch",
"merged_at": 1636121765000
} | Fix #3204 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3212/comments | https://api.github.com/repos/huggingface/datasets/issues/3212/events | https://github.com/huggingface/datasets/issues/3212 | 1,044,640,967 | I_kwDODunzps4-Q_TH | 3,212 | Sort files before loading | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This will be fixed by https://github.com/huggingface/datasets/pull/3221"
] | 1,636,024,111,000 | 1,636,134,598,000 | 1,636,134,598,000 | MEMBER | null | null | null | When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json`, etc.), the files are not loaded in order when using `load_dataset("my_data")`.
This could lead to counter-intuitive results if, for example, the data files are sorted by date or something similar, since they would appear in a different order in the `Dataset`.
The straightforward solution is to sort the list of files alphabetically before loading them.
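A minimal sketch of that workaround from the user side, until the fix lands (the directory and file names are the hypothetical ones from above):
```python
import glob
from datasets import load_dataset

# Sort the shard paths explicitly so data_001.json, data_002.json, ... are loaded in order.
data_files = sorted(glob.glob("my_data/data_*.json"))
dset = load_dataset("json", data_files=data_files)
```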
cc @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3212/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3212/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3211/comments | https://api.github.com/repos/huggingface/datasets/issues/3211/events | https://github.com/huggingface/datasets/pull/3211 | 1,044,617,913 | PR_kwDODunzps4uFkBx | 3,211 | Fix disable_nullable default value to False | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,023,126,000 | 1,636,024,101,000 | 1,636,024,100,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3211",
"html_url": "https://github.com/huggingface/datasets/pull/3211",
"diff_url": "https://github.com/huggingface/datasets/pull/3211.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3211.patch",
"merged_at": 1636024100000
} | Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example, it is `False` in `map` but `True` in `flatten_indices`.
This creates unexpected behaviors like this:
```python
from datasets import Dataset, concatenate_datasets
d1 = Dataset.from_dict({"a": [0, 1, 2, 3]})
d2 = d1.filter(lambda x: x["a"] < 2).flatten_indices()
d1.data.schema == d2.data.schema # False: d2's fields are non-nullable because flatten_indices() defaulted to disable_nullable=True
```
This can cause issues when concatenating datasets, for example.
For consistency, I set `disable_nullable` to `False` in `flatten_indices` and fixed some docstrings.
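A small sketch of the concatenation scenario mentioned above (continuing the snippet; with `disable_nullable` consistently `False`, `d1` and `d2` end up with identical Arrow schemas):
```python
from datasets import concatenate_datasets

# d1 has 4 rows, d2 has the 2 filtered rows; matching schemas make this straightforward.
combined = concatenate_datasets([d1, d2])
print(len(combined))  # 6
```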
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3211/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3211/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3210/comments | https://api.github.com/repos/huggingface/datasets/issues/3210/events | https://github.com/huggingface/datasets/issues/3210 | 1,044,611,471 | I_kwDODunzps4-Q4GP | 3,210 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py | {
"login": "xiuzhilu",
"id": 28184983,
"node_id": "MDQ6VXNlcjI4MTg0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/28184983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiuzhilu",
"html_url": "https://github.com/xiuzhilu",
"followers_url": "https://api.github.com/users/xiuzhilu/followers",
"following_url": "https://api.github.com/users/xiuzhilu/following{/other_user}",
"gists_url": "https://api.github.com/users/xiuzhilu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiuzhilu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiuzhilu/subscriptions",
"organizations_url": "https://api.github.com/users/xiuzhilu/orgs",
"repos_url": "https://api.github.com/users/xiuzhilu/repos",
"events_url": "https://api.github.com/users/xiuzhilu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiuzhilu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?\r\n\r\nMaybe you're having this error because you don't have access to this URL from python ?",
"Hi,do you fixed this error?\r\nI still have this issue when use \"use_auth_token=True\"",
"You don't need authentication to access those github hosted files\r\nPlease check that you can access this URL from your browser and also from your terminal"
] | 1,636,022,846,000 | 1,648,628,795,000 | 1,648,628,795,000 | NONE | null | null | null | When I run `python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate` to fine-tune a translation model with Hugging Face, I get the error "ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py". But I can open https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py in a browser. What should I do to solve the issue? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3210/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3209/comments | https://api.github.com/repos/huggingface/datasets/issues/3209/events | https://github.com/huggingface/datasets/issues/3209 | 1,044,505,771 | I_kwDODunzps4-QeSr | 3,209 | Unpin keras once TF fixes its release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,017,332,000 | 1,636,109,857,000 | 1,636,109,857,000 | MEMBER | null | null | null | Related to:
- #3208 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3209/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3208/comments | https://api.github.com/repos/huggingface/datasets/issues/3208/events | https://github.com/huggingface/datasets/pull/3208 | 1,044,504,093 | PR_kwDODunzps4uFTIs | 3,208 | Pin keras version until TF fixes its release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,017,212,000 | 1,636,018,255,000 | 1,636,018,254,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3208",
"html_url": "https://github.com/huggingface/datasets/pull/3208",
"diff_url": "https://github.com/huggingface/datasets/pull/3208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3208.patch",
"merged_at": 1636018254000
} | Fix #3207. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3208/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3207/comments | https://api.github.com/repos/huggingface/datasets/issues/3207/events | https://github.com/huggingface/datasets/issues/3207 | 1,044,496,389 | I_kwDODunzps4-QcAF | 3,207 | CI error: Another metric with the same name already exists in Keras 2.7.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,636,016,651,000 | 1,636,018,254,000 | 1,636,018,254,000 | MEMBER | null | null | null | ## Describe the bug
The release of TensorFlow 2.7.0 contains an incompatibility with Keras. See:
- keras-team/keras#15579
This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3207/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3206/comments | https://api.github.com/repos/huggingface/datasets/issues/3206/events | https://github.com/huggingface/datasets/pull/3206 | 1,044,216,270 | PR_kwDODunzps4uEZJe | 3,206 | [WIP] Allow user-defined hash functions via a registry | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"@albertvillanova Done. Although new tests will need to be added. I am looking for some feedback on my initial proposal in this PR. Reviews and ideas welcome!",
"Hi ! Thanks for diving into this :)\r\n\r\nWith this approach you get the right hash when doing `Hasher.hash(nlp)` but if you try to hash an object that has `nlp` as one of its attributes for example you will get different hashes every time.\r\n\r\nThis is because `Hasher.hash` is not recursive itself. Indeed what happens when you try to hash an object is that:\r\n1. it is dumped with our custom `dill` pickler (which is recursive)\r\n2. the bytes of the dump are hashed\r\n\r\nTo fix this we must integrate the custom hashing as a custom pickler dumping instead.\r\n\r\nNote that we're only using the `pickler.dumps` method and not `pickler.loads` since we only use it to get hashes, so it doesn't matter if `loads` doesn't reconstruct the object exactly. What's important it only to capture all the necessary information that defines how the object transforms the data (here `nlp.to_bytes()` determines how the spacy pipeline transforms the text).\r\n\r\nOur pickler already has a registry and you can register new dump functions with:\r\n```python\r\nimport dill\r\nimport spacy\r\nfrom datasets.utils.py_utils import pklregister\r\n\r\n@pklregister(spacy.Language)\r\ndef _save_spacy_language(pickler, nlp):\r\n pickler.save_reduce(...) # I think we can use nlp.to_bytes() here\r\n dill._dill.log.info(...)\r\n```\r\n\r\nYou can find some examples of custom dump functions in `py_utils.py`",
"Ah, darn it. Completely missed that register. Time wasted, unfortunately. \r\n\r\nTo better understand what you mean, I figured I'd try the basis of your snippet and I've noticed quite an annoying side-effect of how the pickle dispatch table seems to work. It explicitly uses an object's [`type()`](https://github.com/python/cpython/blob/87032cfa3dc975d7442fd57dea2c6a56d31c911a/Lib/pickle.py#L557-L558), which makes sense for pickling some (primitive) types it is not ideal for more complex ones, I think. `Hasher.hash` has the same issue as far as I can tell.\r\n\r\nhttps://github.com/huggingface/datasets/blob/d21ce54f2c2782f854f975eb1dc2be6f923b4314/src/datasets/fingerprint.py#L187-L191\r\n\r\nThis is very restrictive, and won't work for subclasses. In the case of spaCy, for instance, we register `Language`, but `nlp` is an instance of `English`, which is a _subclass_ of `Language`. These are different types, and so they will not match in the dispatch table. Maybe this is more general approach to cover such cases? Something like this is a start but too broad, but ideally a hierarchy is constructed and traversed of all classes in the table and the lowest class is selected to ensure that the most specific class function is dispatched.\r\n\r\n```python\r\n def hash(cls, value: Any) -> str:\r\n # Try to match the exact type\r\n if type(value) in cls.dispatch:\r\n return cls.dispatch[type(value)](cls, value)\r\n\r\n # Try to match instance (superclass)\r\n for type_cls, func in cls.dispatch.items():\r\n if isinstance(value, type_cls):\r\n return cls.dispatch[type_cls](cls, value)\r\n\r\n return cls.hash_default(value)\r\n```\r\n\r\nThis does not solve the problem for pickling, though. That is quite unfortunate IMO because that implies that users always have to specify the most specific class, which is not always obvious. (For instance, `spacy.load`'s signature returns `Language`, but as said before a subclass might be returned.)\r\n\r\nSecond, I am trying to understand `save_reduce` but I can find very little documentation about it, only the source code which is quite cryptic. Can you explain it a bit? The required arguments are not very clear to me and there is no docstring.\r\n\r\n```python\r\n def save_reduce(self, func, args, state=None, listitems=None, dictitems=None, obj=None):\r\n```",
"Here is an example illustrating the problem with sub-classes.\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom spacy import Language\r\nfrom spacy.lang.en import English\r\n\r\nfrom datasets.utils.py_utils import Pickler, pklregister\r\n\r\n# Only useful in the registry (matching with `nlp`)\r\n# if you swap it out for very specific `English`\r\n@pklregister(English)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n\r\n\r\ndef main():\r\n print(Pickler.dispatch)\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n print(f\"NLP type {type(nlp)} in dispatch table? \", type(nlp) in Pickler.dispatch)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Indeed that's not ideal.\r\nMaybe we could integrate all the subclasses directly in `datasets`. That's simple to do but the catch is that if users have new subclasses of `Language` it won't work.\r\n\r\nOtherwise we can see how to make the API simpler for users by allowing subclasses\r\n```python\r\n# if you swap it out for very specific `English`\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n```\r\n\r\nHere is an idea how to make this work, let me know what you think:\r\n\r\nWhen `Pickler.dumps` is called, it uses `Pickler.save_global` which is a method that is going to be called recursively on all the objects. We can customize this part, and make it work as we want when it encounters a subclass of `Language`.\r\n\r\nFor example when it encounters a subclass of `Language`, we can dynamically register the hashing function for the subclass (`English` for example) in `Pickler.save_global`, right before calling the actual `dill.Pickler.save_global(self, obj, name=name)`:\r\n```python\r\npklregister(type(obj))(hash_function_registered_for_parent_class)\r\ndill.Pickler.save_global(self, obj, name=name)\r\n```\r\n\r\nIn practice that means we can have an additional dispatch dictionary (similar to `Pickler.dispatch`) to store the hashing functions when `allow_subclasses=True`, and use this dictionary in `Pickler.save_global` to check if we need to use a hashing function registered with `allow_subclasses=True` and get `hash_function_registered_for_parent_class`.",
"If I understood you correctly, I do not think that that is enough because you are only doing this for a type and its direct parent class. You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered). I can work on that, if you agree. The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nI do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.",
"> You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered)\r\n\r\nThat makes sense indeed !\r\n\r\n> The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nSure, let's try to not use too complicated stuff\r\n\r\n> I do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.\r\n\r\nIndeed that would feel less hacky, but maybe it's too complex just for this. I feel like this part of the library is already hard to understand when you're not familiar with pickle. IMO having only a few changes that are simpler to understand is better than having a rewrite of `dill`'s core code.\r\n\r\nThanks a lot for your insights, it looks like we're going to have something that works well and that unlocks some nice flexibility for users :) Feel free to ping me anytime if I can help on this",
"Sure, thanks for brainstorming! I'll try to work on it this weekend. Will also revert the current changes in this PR and rename it. ",
"It seems like this is going in the right direction :). \r\n\r\n@BramVanroy Just one small suggestion for future contributions: instead of using `WIP` in the PR title, you can create a draft PR if you're still working on it.",
"Maybe I should just create a new (draft) PR then, seeing that I'll have to rename and revert the changes anyway? I'll link to this PR so that the discussion is at least referenced.",
"I can convert this PR to a draft PR. Let me know what would you prefer.",
"I think reverting my previous commits would make for a dirty (or confusing) commit history, so I'll just create a new one. Thanks."
] | 1,635,981,942,000 | 1,636,115,891,000 | 1,636,115,884,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3206",
"html_url": "https://github.com/huggingface/datasets/pull/3206",
"diff_url": "https://github.com/huggingface/datasets/pull/3206.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3206.patch",
"merged_at": null
} | Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object.
As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself.
This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue).
Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added.
**utils.registry** (added)
This file defines our custom Registry and builds a registry called "hashers". A Registry is basically dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g.
```python
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
```
You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below).
**utils.py_utils** (modified)
Added two functions to deal with classes and their qualified names, that is, their full descriptive names including the module. On the one hand, this allows us to retrieve a string from a given class, e.g. given the `Module` class, return the `torch.nn.Module` str. Conversely, a function is added to convert such a fully qualified name back into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any user interaction needed - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings.
**fingerprint** (modified)
Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`.
```python
# Check if the current object is an instance that is
# applicable to the user-defined hashers. If so, hash
# with the user-defined function
for full_module_name, func in hashers.get_all().items():
registered_cls = get_cls_from_qualname(full_module_name)
if isinstance(value, registered_cls):
return func(value)
```
**Putting it all together**
To test this, you can try the following example with spaCy. First install spaCy from source and checkout a specific commit.
```shell
git clone https://github.com/explosion/spaCy.git
cd spaCy/
git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf
cd ..
git clone https://github.com/BramVanroy/datasets.git
cd datasets
git checkout registry
pip install -e .
pip install ../spaCy
spacy download en_core_web_sm
```
Now you can run the following script. By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`.
```python
import spacy
from datasets.fingerprint import Hasher
from datasets.utils.registry import hashers
# Register a function so that when the Hasher encounters a spacy.Language object
# it uses this custom function to hash instead of the default
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
def main():
print(hashers.get_all())
nlp = spacy.load("en_core_web_sm")
dump1 = Hasher.hash(nlp)
nlp = spacy.load("en_core_web_sm")
dump2 = Hasher.hash(nlp)
print(dump1)
# succeeds when using the registered custom function
# fails if using the default
assert dump1 == dump2
if __name__ == '__main__':
main()
```
To do
====
- The above is just a proof-of-concept. I am open to changes/suggestions
- Tests still need to be written
- We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allow classes. That would make testing easier - otherwise we also need to test for other sorts of objects.
- Maybe the `hashers` definition is better suited in `fingerprint`?
- Documentation/examples need to be updated
- Not sure why the logger is not working in `hash()`
- `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3206/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3205/comments | https://api.github.com/repos/huggingface/datasets/issues/3205/events | https://github.com/huggingface/datasets/pull/3205 | 1,044,099,561 | PR_kwDODunzps4uEAlw | 3,205 | Add Multidoc2dial Dataset | {
"login": "sivasankalpp",
"id": 7344617,
"node_id": "MDQ6VXNlcjczNDQ2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7344617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sivasankalpp",
"html_url": "https://github.com/sivasankalpp",
"followers_url": "https://api.github.com/users/sivasankalpp/followers",
"following_url": "https://api.github.com/users/sivasankalpp/following{/other_user}",
"gists_url": "https://api.github.com/users/sivasankalpp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sivasankalpp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivasankalpp/subscriptions",
"organizations_url": "https://api.github.com/users/sivasankalpp/orgs",
"repos_url": "https://api.github.com/users/sivasankalpp/repos",
"events_url": "https://api.github.com/users/sivasankalpp/events{/privacy}",
"received_events_url": "https://api.github.com/users/sivasankalpp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@songfeng cc",
"Hi @sivasankalpp, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in our master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout multidoc2dial\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"Hi @albertvillanova, I have merged master into my PR branch. All tests are passing. \r\nPlease take a look when you get a chance, thanks! \r\n",
"Thanks for your feedback @lhoestq. We addressed your comments in the latest commit. Let us know if everything looks okay :) "
] | 1,635,972,511,000 | 1,637,775,169,000 | 1,637,772,908,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3205",
"html_url": "https://github.com/huggingface/datasets/pull/3205",
"diff_url": "https://github.com/huggingface/datasets/pull/3205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3205.patch",
"merged_at": 1637772908000
} | This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3204/comments | https://api.github.com/repos/huggingface/datasets/issues/3204/events | https://github.com/huggingface/datasets/issues/3204 | 1,043,707,307 | I_kwDODunzps4-NbWr | 3,204 | FileNotFoundError for TupleIE dataset | {
"login": "arda-vianai",
"id": 75334917,
"node_id": "MDQ6VXNlcjc1MzM0OTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75334917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arda-vianai",
"html_url": "https://github.com/arda-vianai",
"followers_url": "https://api.github.com/users/arda-vianai/followers",
"following_url": "https://api.github.com/users/arda-vianai/following{/other_user}",
"gists_url": "https://api.github.com/users/arda-vianai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arda-vianai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arda-vianai/subscriptions",
"organizations_url": "https://api.github.com/users/arda-vianai/orgs",
"repos_url": "https://api.github.com/users/arda-vianai/repos",
"events_url": "https://api.github.com/users/arda-vianai/events{/privacy}",
"received_events_url": "https://api.github.com/users/arda-vianai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?\r\nThanks.",
"Hi @arda-vianai,\r\n\r\nfirst, you can try:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all', revision=\"master\")\r\n```\r\nIf this doesn't work, your version of `datasets` is missing some features that are required to run the dataset script, so install the master version with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand then:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all')\r\n```\r\nshould work (even without `revision`).",
"@mariosasko \r\nThanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!!!\r\nMany thanks and great job!\r\n-arda"
] | 1,635,951,415,000 | 1,636,127,475,000 | 1,636,121,765,000 | NONE | null | null | null | Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFoundError. Is the data not available?
Many thanks.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3204/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3203/comments | https://api.github.com/repos/huggingface/datasets/issues/3203/events | https://github.com/huggingface/datasets/pull/3203 | 1,043,552,766 | PR_kwDODunzps4uCNoT | 3,203 | Updated: DaNE - updated URL for download | {
"login": "MalteHB",
"id": 47593213,
"node_id": "MDQ6VXNlcjQ3NTkzMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/47593213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MalteHB",
"html_url": "https://github.com/MalteHB",
"followers_url": "https://api.github.com/users/MalteHB/followers",
"following_url": "https://api.github.com/users/MalteHB/following{/other_user}",
"gists_url": "https://api.github.com/users/MalteHB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MalteHB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MalteHB/subscriptions",
"organizations_url": "https://api.github.com/users/MalteHB/orgs",
"repos_url": "https://api.github.com/users/MalteHB/repos",
"events_url": "https://api.github.com/users/MalteHB/events{/privacy}",
"received_events_url": "https://api.github.com/users/MalteHB/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?",
"@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI was working in Google Colab and got an error that it was unable to use the URL. I then forked the project, updated the URL, ran it locally and it worked. I therefore assumed that my URL update fixed the issue, however, I see now that it might rather be a Google Colab issue... \r\n\r\nStill - this seems to be the official URL for downloading the dataset, and I think that it will be most beneficial to use. :-) ",
"It looks like they're using these new urls for their new datasets. Maybe let's change to the new URL in case the old one stops working at one point. Thanks"
] | 1,635,944,113,000 | 1,636,031,676,000 | 1,636,026,403,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3203",
"html_url": "https://github.com/huggingface/datasets/pull/3203",
"diff_url": "https://github.com/huggingface/datasets/pull/3203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3203.patch",
"merged_at": 1636026403000
} | It seems that DaNLP has updated their download URLs and it therefore also needs to be updated in here... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3202/comments | https://api.github.com/repos/huggingface/datasets/issues/3202/events | https://github.com/huggingface/datasets/issues/3202 | 1,043,213,660 | I_kwDODunzps4-Li1c | 3,202 | Add mIoU metric | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Resolved via https://github.com/huggingface/datasets/pull/3745."
] | 1,635,928,952,000 | 1,654,105,145,000 | 1,654,105,144,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html).
Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the mean Intersection over Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset.
An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132) for instance.
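For reference, a minimal NumPy sketch of the metric itself (per-class IoU averaged over the classes that occur); the function name, signature and handling of absent classes are illustrative assumptions, not the linked mmsegmentation implementation:
```python
import numpy as np

def mean_iou(predictions: np.ndarray, references: np.ndarray, num_labels: int) -> float:
    """Compute the mean Intersection over Union of two integer label maps of the same shape."""
    ious = []
    for label in range(num_labels):
        pred_mask = predictions == label
        ref_mask = references == label
        union = np.logical_or(pred_mask, ref_mask).sum()
        if union == 0:
            continue  # class absent from both prediction and reference
        intersection = np.logical_and(pred_mask, ref_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```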
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3202/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3201/comments | https://api.github.com/repos/huggingface/datasets/issues/3201/events | https://github.com/huggingface/datasets/issues/3201 | 1,043,209,142 | I_kwDODunzps4-Lhu2 | 3,201 | Add GSM8K dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Closed via https://github.com/huggingface/datasets/pull/4103"
] | 1,635,928,604,000 | 1,649,850,972,000 | 1,649,850,971,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** GSM8K (short for Grade School Math 8k)
- **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers.
- **Paper:** https://openai.com/blog/grade-school-math/
- **Data:** https://github.com/openai/grade-school-math
- **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3201/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3200/comments | https://api.github.com/repos/huggingface/datasets/issues/3200/events | https://github.com/huggingface/datasets/pull/3200 | 1,042,887,291 | PR_kwDODunzps4uAZLu | 3,200 | Catch token invalid error in CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,890,186,000 | 1,635,932,468,000 | 1,635,932,468,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3200",
"html_url": "https://github.com/huggingface/datasets/pull/3200",
"diff_url": "https://github.com/huggingface/datasets/pull/3200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3200.patch",
"merged_at": 1635932468000
} | The staging back end sometimes returns invalid token errors when trying to delete a repo.
I modified the fixture in the test that uses staging to ignore this error | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3200/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3199/comments | https://api.github.com/repos/huggingface/datasets/issues/3199/events | https://github.com/huggingface/datasets/pull/3199 | 1,042,860,935 | PR_kwDODunzps4uAVzQ | 3,199 | Bump huggingface_hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,888,550,000 | 1,636,854,491,000 | 1,635,889,300,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3199",
"html_url": "https://github.com/huggingface/datasets/pull/3199",
"diff_url": "https://github.com/huggingface/datasets/pull/3199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3199.patch",
"merged_at": 1635889300000
} | huggingface_hub just released its first minor version, so we need to update the dependency
It was supposed to be part of 1.15.0 but I'm adding it for 1.15.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3199/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3198/comments | https://api.github.com/repos/huggingface/datasets/issues/3198/events | https://github.com/huggingface/datasets/pull/3198 | 1,042,679,548 | PR_kwDODunzps4t_5G8 | 3,198 | Add Multi-Lingual LibriSpeech | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,877,439,000 | 1,636,045,762,000 | 1,636,045,762,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3198",
"html_url": "https://github.com/huggingface/datasets/pull/3198",
"diff_url": "https://github.com/huggingface/datasets/pull/3198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3198.patch",
"merged_at": 1636045762000
} | Add https://www.openslr.org/94/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3198/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3197/comments | https://api.github.com/repos/huggingface/datasets/issues/3197/events | https://github.com/huggingface/datasets/pull/3197 | 1,042,541,127 | PR_kwDODunzps4t_cry | 3,197 | Fix optimized encoding for arrays | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,868,553,000 | 1,635,880,344,000 | 1,635,880,343,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3197",
"html_url": "https://github.com/huggingface/datasets/pull/3197",
"diff_url": "https://github.com/huggingface/datasets/pull/3197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3197.patch",
"merged_at": 1635880343000
} | Hi !
#3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists.
cc @eladsegal fyi (no big deal) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3197/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3197/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3196/comments | https://api.github.com/repos/huggingface/datasets/issues/3196/events | https://github.com/huggingface/datasets/pull/3196 | 1,042,223,913 | PR_kwDODunzps4t-bxy | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,852,530,000 | 1,635,867,669,000 | 1,635,867,668,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196",
"html_url": "https://github.com/huggingface/datasets/pull/3196",
"diff_url": "https://github.com/huggingface/datasets/pull/3196.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3196.patch",
"merged_at": 1635867668000
} | This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls
Fix #3040 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3196/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3195/comments | https://api.github.com/repos/huggingface/datasets/issues/3195/events | https://github.com/huggingface/datasets/pull/3195 | 1,042,204,044 | PR_kwDODunzps4t-ZR0 | 3,195 | More robust `None` handling | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also created a PR regarding `disable_nullable` that must be always `False` by default, in order to always allow None values\r\nhttps://github.com/huggingface/datasets/pull/3211",
"@lhoestq I addressed your comments, added tests, did some refactoring to make the implementation cleaner and added support for `None` values in `map` transforms when the feature type is `ArrayXD` (previously, I only implemented `None` decoding).\r\n\r\nMy only concern is that during decoding `ArrayXD` arrays with `None` values will be auto-casted to `float64` to allow `np.nan` insertion and this might be unexpected if `dtype` is not `float`, so one option would be to allow `None` values only if the storage type is `float32` or `float64`. Let me know WDYT would be the most consistent behavior here.",
"Cool ! :D\r\n> My only concern is that during decoding ArrayXD arrays with None values will be auto-casted to float64 to allow np.nan insertion and this might be unexpected if dtype is not float, so one option would be to allow None values only if the storage type is float32 or float64. Let me know WDYT would be the most consistent behavior here.\r\n\r\nYes that makes sense to only fill with nan if the type is compatible",
"After some more experimenting, I think we can keep auto-cast to float because PyArrow also does it:\r\n```python\r\nimport pyarrow as pa\r\narr = pa.array([1, 2, 3, 4, None], type=pa.int32()).to_numpy(zero_copy_only=False) # None present - int32 -> float64\r\nassert arr.dtype == np.float64\r\n```\r\nAdditional changes:\r\n* fixes a bug in the `_is_zero_copy_only` implementation for the ArraXD types. Previously, `_is_zero_copy_only` would always return False for these types. Still have to see if it's possible to optimize copying of the non-extension types (`Sequence`, ...), but I plan to work on that in a separate PR.\r\n* https://github.com/huggingface/datasets/pull/2891 introduced a bug where the dtype of `ArrayXD` wouldn't be preserved due to `to_pylist` call in NumPy Formatter (`np.array(np.array(..).tolist())` doesn't necessarily preserve dtype of the initial array), so I'm also fixing that. ",
"The CI fail for windows is unrelated to this PR, merging"
] | 1,635,851,710,000 | 1,639,060,020,000 | 1,639,060,018,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3195",
"html_url": "https://github.com/huggingface/datasets/pull/3195",
"diff_url": "https://github.com/huggingface/datasets/pull/3195.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3195.patch",
"merged_at": 1639060017000
} | PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well.
[Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing)
Changes:
* allow None for the features types with special encoding (`ClassLabel, TranslationVariableLanguages, Value, _ArrayXD`)
* handle None in `class_encode_column` (also there is an option to stringify Nones and treat them as a class)
* support None sorting in `sort` (use pandas for that)
* handle None in align_labels_with_mapping
* support for None in ArrayXD (converts `None` to `np.nan` to align the behavior with PyArrow)
* support for None in the Audio/Image feature
* allow promotion when concatenating tables (`pa.concat_tables(table_list, promote=True)`) and `null` row/~~column~~ broadcasting similar to pandas (see the small PyArrow sketch after this list)
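A minimal PyArrow-only sketch of the promotion behavior relied on here; the tables and column names are made up purely for illustration:
```python
import pyarrow as pa

t1 = pa.table({"a": [1, 2]})
t2 = pa.table({"a": [3, None], "b": ["x", "y"]})

# promote=True unifies the schemas and fills the column "b" missing from t1 with nulls
# instead of raising a schema mismatch error.
combined = pa.concat_tables([t1, t2], promote=True)
print(combined.to_pydict())  # {'a': [1, 2, 3, None], 'b': [None, None, 'x', 'y']}
```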
Additional notes:
* use `null` instead of `none` for function arguments for consistency with existing `disable_nullable`
* fixes a bug with the `update_metadata_with_features` call in `Dataset.rename_columns`
* had to update some tests, let me know if that's ok
TODO:
- [x] check how the Audio features behaves with Nones
- [x] Better None handling in `concatenate_datasets`/`add_item`
- [x] Fix formatting with Nones
- [x] Add Colab with examples
- [x] Tests
TODOs for subsequent PRs:
- Mention None handling in the docs
- Add `drop_null`/`fill_null` to `Dataset`/`DatasetDict`
Fix #3181 #3253 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3195/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3194/comments | https://api.github.com/repos/huggingface/datasets/issues/3194/events | https://github.com/huggingface/datasets/pull/3194 | 1,041,999,535 | PR_kwDODunzps4t91Eg | 3,194 | Update link to Datasets Tagging app in Spaces | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,840,830,000 | 1,636,367,783,000 | 1,636,367,782,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3194",
"html_url": "https://github.com/huggingface/datasets/pull/3194",
"diff_url": "https://github.com/huggingface/datasets/pull/3194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3194.patch",
"merged_at": 1636367782000
} | Fix #3193. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3194/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3193/comments | https://api.github.com/repos/huggingface/datasets/issues/3193/events | https://github.com/huggingface/datasets/issues/3193 | 1,041,971,117 | I_kwDODunzps4-Gzet | 3,193 | Update link to datasets-tagging app | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,635,838,799,000 | 1,636,367,782,000 | 1,636,367,782,000 | MEMBER | null | null | null | Once datasets-tagging has been transferred to Spaces:
- huggingface/datasets-tagging#22
We should update the link in Datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3193/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3192/comments | https://api.github.com/repos/huggingface/datasets/issues/3192/events | https://github.com/huggingface/datasets/issues/3192 | 1,041,308,086 | I_kwDODunzps4-ERm2 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,635,780,968,000 | 1,635,782,223,000 | null | CONTRIBUTOR | null | null | null | While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to tests this in detail but at least the tests seem not to run correctly (deadlocking).
## Steps to reproduce the bug
```shell
pytest tests/test_arrow_dataset.py -k "test_filter_multiprocessing"
pytest tests/test_arrow_dataset.py -k "test_map_multiprocessing"
```
## Expected results
The functionality to work on all platforms.
## Actual results
Deadlock.
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2, also tested with 3.7.9
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3192/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3191/comments | https://api.github.com/repos/huggingface/datasets/issues/3191/events | https://github.com/huggingface/datasets/issues/3191 | 1,041,225,111 | I_kwDODunzps4-D9WX | 3,191 | Dataset viewer issue for '*compguesswhat*' | {
"login": "benotti",
"id": 2545336,
"node_id": "MDQ6VXNlcjI1NDUzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2545336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benotti",
"html_url": "https://github.com/benotti",
"followers_url": "https://api.github.com/users/benotti/followers",
"following_url": "https://api.github.com/users/benotti/following{/other_user}",
"gists_url": "https://api.github.com/users/benotti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benotti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benotti/subscriptions",
"organizations_url": "https://api.github.com/users/benotti/orgs",
"repos_url": "https://api.github.com/users/benotti/repos",
"events_url": "https://api.github.com/users/benotti/events{/privacy}",
"received_events_url": "https://api.github.com/users/benotti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/compguesswhat/4d08b9e0a8d1cf036c9626c93be4a759fdd9fcce050ea503ea14b075e830c799/compguesswhat.py\", line 251, in _generate_examples\r\n with gzip.open(filepath) as in_file:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 58, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 173, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://compguesswhat-original/0.2.0/compguesswhat.train.jsonl.gz::https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1'\r\n```\r\n\r\nIt's an issue with the streaming mode. Note that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. This dataset is above the limit, hence the error.\r\n\r\nSame case as https://github.com/huggingface/datasets/issues/3186#issuecomment-1096549774.",
"cc @huggingface/datasets ",
"There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1\r\n> Dropbox Error: That didn't work for some reason\r\n\r\nError reported to their repo:\r\n- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1",
"Closed by:\r\n- #4968"
] | 1,635,776,209,000 | 1,662,969,749,000 | 1,662,969,749,000 | NONE | null | null | null | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3191/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3190/comments | https://api.github.com/repos/huggingface/datasets/issues/3190/events | https://github.com/huggingface/datasets/issues/3190 | 1,041,153,631 | I_kwDODunzps4-Dr5f | 3,190 | combination of shuffle and filter results in a bug | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I cannot reproduce this on master and pyarrow==4.0.1.\r\n",
"Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?",
"Thanks a lot, fixes with 1.13"
] | 1,635,772,049,000 | 1,635,850,249,000 | 1,635,850,249,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Hi,
I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you can see in the filtered results, the filtered labels are not unique, meaning `filter` has not worked. Any suggestions for a temporary fix are appreciated @lhoestq.
Thanks.
Best regards
Rabeeh
## Steps to reproduce the bug
```python
import numpy as np
import datasets
datasets = datasets.load_dataset('super_glue', 'rte', script_version="master")
shuffled_data = datasets["train"].shuffle(seed=42)
for label in range(2):
print("label ", label)
data = shuffled_data.filter(lambda example: int(example['label']) == label)
print("length ", len(data), np.unique(data['label']))
```
## Expected results
Filtering per label, should only return the data with that specific label.
## Actual results
As you can see, the data filtered per label still contains both labels [0, 1]:
```
label 0
length 1249 [0 1]
label 1
length 1241 [0 1]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.7.11
- PyArrow version: 5.0.0
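For what it's worth, a possible temporary workaround (a hedged sketch, assuming `Dataset.flatten_indices()` is available in the installed version) is to materialize the shuffled dataset before filtering, so that `filter` no longer runs on top of a stale indices mapping:
```python
import numpy as np
import datasets

dsets = datasets.load_dataset("super_glue", "rte", script_version="master")
# flatten_indices() writes the shuffled order into a new arrow table
shuffled_data = dsets["train"].shuffle(seed=42).flatten_indices()
for label in range(2):
    data = shuffled_data.filter(lambda example: int(example["label"]) == label)
    print("label", label, "length", len(data), np.unique(data["label"]))
```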
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3190/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3189/comments | https://api.github.com/repos/huggingface/datasets/issues/3189/events | https://github.com/huggingface/datasets/issues/3189 | 1,041,044,986 | I_kwDODunzps4-DRX6 | 3,189 | conll2003 incorrect label explanation | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @BramVanroy,\r\n\r\nsince these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:\r\n```python\r\ndset.features[field_name].feature.names # .feature because it's a sequence of labels\r\n```\r\n\r\nand to find the mapping between names and integers, use: \r\n```python\r\ndset.features[field_name].feature.int2str(value_or_values_list) # map integer value to string value\r\n# or\r\ndset.features[field_name].feature.str2int(value_or_values_list) # map string value to integer value\r\n```\r\n\r\n"
] | 1,635,764,610,000 | 1,636,454,458,000 | 1,636,454,458,000 | CONTRIBUTOR | null | null | null | In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows
> - `id`: a `string` feature.
> - `tokens`: a `list` of `string` features.
> - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
> - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
> - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4) `B-LOC` (5), `I-LOC` (6) `B-MISC` (7), `I-MISC` (8).
First of all, it would be great if we could get a list of ALL possible pos_tags.
Second, the chunk tag labels cannot be correct. The description says the values go from 0 to 4, whereas the data shows values from at least 11 to 21, as well as 0.
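A hedged sketch of how one could list every tag name directly from the loaded features (assuming the usual `Sequence(ClassLabel)` layout):
```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
for field in ("pos_tags", "chunk_tags", "ner_tags"):
    names = ds.features[field].feature.names  # position in this list == integer label
    print(field, len(names), names)
```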
EDIT: not really a bug, sorry for mistagging. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3189/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3188/comments | https://api.github.com/repos/huggingface/datasets/issues/3188/events | https://github.com/huggingface/datasets/issues/3188 | 1,040,980,712 | I_kwDODunzps4-DBro | 3,188 | conll2002 issues | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n",
"Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.",
"@lhoestq The \"point of contact\" is still an issue.",
"It will be fixed in https://github.com/huggingface/datasets/pull/3274, thanks"
] | 1,635,760,164,000 | 1,636,984,259,000 | 1,636,737,491,000 | CONTRIBUTOR | null | null | null | **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
In addition, the "point of contact" has encoding issues and does not work when clicked.
Am I the one who added this dataset ? No, @lhoestq did | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3188/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3187/comments | https://api.github.com/repos/huggingface/datasets/issues/3187/events | https://github.com/huggingface/datasets/pull/3187 | 1,040,412,869 | PR_kwDODunzps4t44Ab | 3,187 | Add ChrF(++) (as implemented in sacrebleu) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,670,438,000 | 1,635,864,650,000 | 1,635,863,486,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3187",
"html_url": "https://github.com/huggingface/datasets/pull/3187",
"diff_url": "https://github.com/huggingface/datasets/pull/3187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3187.patch",
"merged_at": 1635863486000
} | Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in sacrebleu and are therefore implemented in much the same way as TER. I verified the implementation against sacrebleu's own tests. You can try this below for yourself:
```python
import datasets
EPSILON = 1e-4
chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf")
test_cases = [
(["abcdefg"], ["hijklmnop"], 0.0),
(["a"], ["b"], 0.0),
([""], ["b"], 0.0),
([""], ["ref"], 0.0),
([""], ["reference"], 0.0),
(["aa"], ["ab"], 8.3333),
(["a", "b"], ["a", "c"], 8.3333),
(["a"], ["a"], 16.6667),
(["a b c"], ["a b c"], 50.0),
(["a b c"], ["abc"], 50.0),
([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730),
([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "],
["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698),
(["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0),
]
for hyp, ref, score in test_cases:
# Note the reference transformation which is different from scarebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3, eps_smoothing=True)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
test_cases_effective_order = [
(["a"], ["a"], 100.0),
([""], ["reference"], 0.0),
(["a b c"], ["a b c"], 100.0),
(["a b c"], ["abc"], 100.0),
([""], ["c"], 0.0),
(["a", "b"], ["a", "c"], 50.0),
(["aa"], ["ab"], 25.0),
]
for hyp, ref, score in test_cases_effective_order:
# Note the reference transformation which is different from sacrebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3, eps_smoothing=False)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
test_cases_keep_whitespace = [
(
["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."],
["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."],
67.3481606,
),
(
["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."],
65.2414427,
),
]
for hyp, ref, score in test_cases_keep_whitespace:
# Note the reference transformation which is different from sacrebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3,
whitespace=True)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."]
references = [["The ties between Obama and Netanyahu are not particularly friendly."]]
print(chrf.compute(predictions=predictions, references=references))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3187/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3186/comments | https://api.github.com/repos/huggingface/datasets/issues/3186/events | https://github.com/huggingface/datasets/issues/3186 | 1,040,369,397 | I_kwDODunzps4-Asb1 | 3,186 | Dataset viewer for nli_tr | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/nli_tr/c2ddd0c0a70caddac6a81c2dae5ca7939f00060d517d08f1983927818dba6521/nli_tr.py\", line 155, in _generate_examples\r\n with codecs.open(filepath, encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/codecs.py\", line 905, in open\r\n file = builtins.open(filename, mode, buffering)\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_test.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip'\r\n```\r\n\r\nNote that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. `nli_tr` is above the limit, hence the error.",
"cc @huggingface/datasets ",
"Apparently there is an issue with the data source URLs: Server Not Found\r\n- https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip\r\n\r\nWe are contacting the authors to ask them: \r\n@e-budur you are one of the authors: are you aware of the issue with the URLs of your data ?",
"Reported to their repo:\r\n- https://github.com/boun-tabi/NLI-TR/issues/9",
"The server issue was temporary and is now resolved.",
"Once we have implemented support for streaming, the viewer works: https://huggingface.co/datasets/nli_tr"
] | 1,635,652,593,000 | 1,662,974,134,000 | 1,662,972,189,000 | CONTRIBUTOR | null | null | null | ## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature that helps users view datasets online.
We just noticed that the dataset viewer widget for the `nli_tr` dataset shows the error below. The error must be due to a temporary problem that blocked access to the dataset through the dataset viewer, but the dataset is currently accessible through the link in the error message. May we kindly ask if it would be possible to rerun the job so that the dataset viewer can access the dataset?
Thank you.
Emrah
------------------------------------------
Server Error
Status code: 404
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_train.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip
------------------------------------------
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3186/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3185/comments | https://api.github.com/repos/huggingface/datasets/issues/3185/events | https://github.com/huggingface/datasets/issues/3185 | 1,040,291,961 | I_kwDODunzps4-AZh5 | 3,185 | 7z dataset preview not implemented? | {
"login": "Kirili4ik",
"id": 30757466,
"node_id": "MDQ6VXNlcjMwNzU3NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/30757466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kirili4ik",
"html_url": "https://github.com/Kirili4ik",
"followers_url": "https://api.github.com/users/Kirili4ik/followers",
"following_url": "https://api.github.com/users/Kirili4ik/following{/other_user}",
"gists_url": "https://api.github.com/users/Kirili4ik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kirili4ik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kirili4ik/subscriptions",
"organizations_url": "https://api.github.com/users/Kirili4ik/orgs",
"repos_url": "https://api.github.com/users/Kirili4ik/repos",
"events_url": "https://api.github.com/users/Kirili4ik/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kirili4ik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.",
"Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture d’écran 2022-04-12 à 13 47 45\" src=\"https://user-images.githubusercontent.com/1676121/162953339-cd8922d7-9037-408b-b896-eac1af0bb54f.png\">\r\n\r\nThanks for reporting!"
] | 1,635,625,107,000 | 1,649,764,096,000 | 1,649,764,087,000 | NONE | null | null | null | ## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
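As a possible local workaround (a hedged sketch; it assumes `py7zr` is installed, which the loading script appears to require), the dataset can still be loaded in normal, non-streaming mode:
```python
from datasets import load_dataset

ds = load_dataset("samsum", split="train")  # downloads and extracts the .7z archive locally
print(ds[0])
```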
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3185/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3185/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3184/comments | https://api.github.com/repos/huggingface/datasets/issues/3184/events | https://github.com/huggingface/datasets/pull/3184 | 1,040,114,102 | PR_kwDODunzps4t4J61 | 3,184 | RONEC v2 | {
"login": "dumitrescustefan",
"id": 22746816,
"node_id": "MDQ6VXNlcjIyNzQ2ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/22746816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumitrescustefan",
"html_url": "https://github.com/dumitrescustefan",
"followers_url": "https://api.github.com/users/dumitrescustefan/followers",
"following_url": "https://api.github.com/users/dumitrescustefan/following{/other_user}",
"gists_url": "https://api.github.com/users/dumitrescustefan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumitrescustefan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumitrescustefan/subscriptions",
"organizations_url": "https://api.github.com/users/dumitrescustefan/orgs",
"repos_url": "https://api.github.com/users/dumitrescustefan/repos",
"events_url": "https://api.github.com/users/dumitrescustefan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumitrescustefan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the review. I totally understand what you are saying. Normally, I would definitely agree with you, but in this particular case, the quality of v1 is poor, and the dataset itself is small (at the time we created v1 it was the only RO NER dataset, and its size was limited by the available resources). \r\n\r\nThis is why we worked to build a larger one, with much better inter-annotator agreement. Fact is, models trained on v1 will be of very low quality and I would not recommend to anybody to use/do that. That's why I'd strongly suggest we replace v1 with v2, and kindof make v1 vanish :) \r\n\r\nWhat do you think? If you insist on having v1 accessible, I'll add the required code. Thanks!\r\n\r\n",
"Ok I see ! I think it's fine then, no need to re-add V1"
] | 1,635,591,003,000 | 1,635,868,943,000 | 1,635,868,942,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3184",
"html_url": "https://github.com/huggingface/datasets/pull/3184",
"diff_url": "https://github.com/huggingface/datasets/pull/3184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3184.patch",
"merged_at": 1635868942000
} | Hi, as we've recently finished with the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential as links to V1 are no longer valid.
In reality, we'd like to replace v1 completely, as v2 is a full re-annotation of v1 with additional data (up to 2x the size of v1).
I've run `make style` and all the dummy and real data tests, and they passed.
I hope it's okay to merge the new RONEC v2 into datasets.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3184/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3183/comments | https://api.github.com/repos/huggingface/datasets/issues/3183/events | https://github.com/huggingface/datasets/pull/3183 | 1,039,761,120 | PR_kwDODunzps4t3Dag | 3,183 | Add missing docstring to DownloadConfig | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,635,526,595,000 | 1,635,848,738,000 | 1,635,848,737,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3183",
"html_url": "https://github.com/huggingface/datasets/pull/3183",
"diff_url": "https://github.com/huggingface/datasets/pull/3183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3183.patch",
"merged_at": 1635848737000
} | Document the `use_etag` and `num_proc` attributes in `DownloadConfig`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3183/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3182/comments | https://api.github.com/repos/huggingface/datasets/issues/3182/events | https://github.com/huggingface/datasets/pull/3182 | 1,039,739,606 | PR_kwDODunzps4t2-9J | 3,182 | Don't memoize strings when hashing since two identical strings may have different python ids | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This change slows down the hash computation a little bit but from my tests it doesn't look too impactful. So I think it's fine to merge this."
] | 1,635,524,777,000 | 1,635,845,738,000 | 1,635,845,737,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3182",
"html_url": "https://github.com/huggingface/datasets/pull/3182",
"diff_url": "https://github.com/huggingface/datasets/pull/3182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3182.patch",
"merged_at": 1635845737000
} | When hashing an object that contains the same string several times, the hash can differ depending on whether or not the identical strings share the same python `id()`.
Here is example code that shows how the issue can affect caching:
```python
import json
import pyarrow as pa
from datasets.features import Features
from datasets.fingerprint import Hasher
schema = pa.schema([pa.field("some_string", pa.string()), pa.field("another_string", pa.string())])
features_from_schema = Features.from_arrow_schema(schema)
Hasher.hash(features_from_schema) # dffa9dca9a73fd8c
features_dict = json.loads('{"some_string": {"dtype": "string", "id": null, "_type": "Value"}, "another_string": {"dtype": "string", "id": null, "_type": "Value"}}')
features_from_json = Features.from_dict(features_dict)
Hasher.hash(features_from_json) # 3812e76b15e6420e
features_from_schema == features_from_json # True
```
This is because in `features_dict`, some strings like "dtype" are repeated but don't share the same id, contrary to the ones in `features_from_schema`.
I fixed that by disabling memoization for strings.
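A minimal sketch of what "disabling memoization for strings" can look like (illustrative only; the real change lives in `datasets.utils.py_utils` and may differ in detail):
```python
import dill

class NoStringMemoPickler(dill.Pickler):
    def memoize(self, obj):
        # Never record strings in the memo table, so equal strings serialize identically
        # whether or not they happen to share the same python id().
        if type(obj) is not str:
            dill.Pickler.memoize(self, obj)
```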
This could be optimized in the future by implementing a smarter memoization with a special handling for strings. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3182/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3181/comments | https://api.github.com/repos/huggingface/datasets/issues/3181/events | https://github.com/huggingface/datasets/issues/3181 | 1,039,682,097 | I_kwDODunzps49-Eox | 3,181 | `None` converted to `"None"` when loading a dataset | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r\n\r\nIt is true that strings were an exception, but this was recently fixed by @lhoestq (see #3158).",
"Thanks for reporting.\r\n\r\nThis is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change ?\r\n\r\nEDIT: the other types (bool, int, etc) can also become nullable IMO",
"So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`.\r\nUsing the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all.",
"Hi @eladsegal,\r\n\r\nUse `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e\r\n```\r\n\r\nI'm making all the feature types nullable as we speak, and the fix will be merged probably early next week.",
"Hi @mariosasko, is there an estimation as to when this issue will be fixed?",
"https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :)\r\n\r\nFor now feel free to install `datasets` from the master branch",
"Thanks, but unfortunately looks like it isn't fixed yet 😢 \r\n[notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing)\r\n[notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing)",
"Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick.",
"Thank you, it works! 🎊 "
] | 1,635,521,033,000 | 1,639,185,400,000 | 1,639,060,017,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When loading a dataset, `None` values of type `NoneType` are converted to `'None'` of type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text"]["section_name"])
```
When installing version 1.14.0, the output is
`[None, 'Introduction', 'Benchmark Datasets', ...]`
When installing from the master branch, the output is
`['None', 'Introduction', 'Benchmark Datasets', ...]`
Notice how the first element was changed from `NoneType` to `str`.
## Expected results
`None` should stay as is.
## Actual results
`None` is converted to a string.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: master
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
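A hedged sanity check (a hypothetical snippet, not part of the report) for the expected behaviour once string columns are nullable again:
```python
from datasets import Dataset, Features, Value

features = Features({"section_name": Value("string")})
ds = Dataset.from_dict({"section_name": [None, "Introduction"]}, features=features)
print(ds[0]["section_name"] is None)  # expected: True, i.e. None is preserved rather than cast to the string "None"
```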
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3181/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3180/comments | https://api.github.com/repos/huggingface/datasets/issues/3180/events | https://github.com/huggingface/datasets/pull/3180 | 1,039,641,316 | PR_kwDODunzps4t2qQn | 3,180 | fix label mapping | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"heck, test failings. moving to draft. will come back to this later today hopefully",
"Thanks for fixing this :)\r\nI just updated the dataset_infos.json and added the missing `pretty_name` tag to the dataset card",
"thank you @lhoestq! running around as always it felt through as a lower priority..."
] | 1,635,518,544,000 | 1,635,860,467,000 | 1,635,849,432,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3180",
"html_url": "https://github.com/huggingface/datasets/pull/3180",
"diff_url": "https://github.com/huggingface/datasets/pull/3180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3180.patch",
"merged_at": 1635849432000
} | Fixing label mapping for hlgd.
0 corresponds to same event and 1 corresponds to different event
<img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png">
<img width="638" alt="Capture d’écran 2021-10-29 à 10 40 09 AM" src="https://user-images.githubusercontent.com/16107619/139454813-93066a3c-7d33-4f56-b133-2f1a7661e438.png">
 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3179/comments | https://api.github.com/repos/huggingface/datasets/issues/3179/events | https://github.com/huggingface/datasets/issues/3179 | 1,039,571,928 | I_kwDODunzps499pvY | 3,179 | Cannot load dataset when the config name is "special" | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"The issue is that the datasets are malformed. Not a bug with the datasets library"
] | 1,635,514,247,000 | 1,635,514,521,000 | 1,635,514,521,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1".
But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/huggingface/datasets-preview-backend/issues/78
## Steps to reproduce the bug
```python
>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names("Check/region_1")
['Check___region_1']
>>> load_dataset("Check/region_1")
Using custom data configuration Check___region_1-d2b3bc48f11c9be2
Downloading and preparing dataset json/Check___region_1 to /home/slesage/.cache/huggingface/datasets/json/Check___region_1-d2b3bc48f11c9be2/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4443.12it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1277.19it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "builder_name" does not exist in table schema'
```
Loading in streaming mode also returns something strange:
```python
>>> list(load_dataset("Check/region_1", streaming=True, split="train"))
Using custom data configuration Check___region_1-d2b3bc48f11c9be2
[{'builder_name': None, 'citation': '', 'config_name': None, 'dataset_size': None, 'description': '', 'download_checksums': None, 'download_size': None, 'features': {'speech': {'feature': {'dtype': 'float64', 'id': None, '_type': 'Value'}, 'length': -1, 'id': None, '_type': 'Sequence'}, 'sampling_rate': {'dtype': 'int64', 'id': None, '_type': 'Value'}, 'label': {'dtype': 'string', 'id': None, '_type': 'Value'}}, 'homepage': '', 'license': '', 'post_processed': None, 'post_processing_size': None, 'size_in_bytes': None, 'splits': None, 'supervised_keys': None, 'task_templates': None, 'version': None}, {'_data_files': [{'filename': 'dataset.arrow'}], '_fingerprint': 'f1702bb5533c549c', '_format_columns': ['speech', 'sampling_rate', 'label'], '_format_kwargs': {}, '_format_type': None, '_indexes': {}, '_indices_data_files': None, '_output_all_columns': False, '_split': None}]
```
## Expected results
The dataset should be loaded
## Actual results
An error occurs
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Linux-5.11.0-1020-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3179/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3178/comments | https://api.github.com/repos/huggingface/datasets/issues/3178/events | https://github.com/huggingface/datasets/issues/3178 | 1,039,539,076 | I_kwDODunzps499huE | 3,178 | "Property couldn't be hashed properly" even though fully picklable | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"After some digging, I found that this is caused by `dill` and using `recurse=True)` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, then objects referred to in the global dictionary are recursively traced and pickled, instead of the default behavior of attempting to store the entire global dictionary. This is needed for functions defined via exec().\r\n\r\nIn the utils, this is explicitly enabled\r\n\r\nhttps://github.com/huggingface/datasets/blob/df63614223bf1dd1feb267d39d741bada613352c/src/datasets/utils/py_utils.py#L327-L330\r\n\r\nIs this really necessary? Is there a way around it? Also pinging the spaCy team in case this is easy to solve on their end. (I hope so.)",
"Hi ! Thanks for reporting\r\n\r\nYes `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function\r\n\r\nEDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in https://github.com/huggingface/datasets/issues/3044#issuecomment-948818210)",
"I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged. ",
"@lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanroy/spaCy) and installing the base model with `python -m spacy download en_core_web_sm`.\r\n\r\n```python\r\nfrom functools import partial\r\nfrom pathlib import Path\r\n\r\nimport spacy\r\nfrom datasets import Dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n lines = Path(fin).read_text(encoding=\"utf-8\").splitlines()\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n ds = Dataset.from_dict({\"text\": lines, \"text_id\": list(range(len(lines)))})\r\n tok = partial(tokenize, nlp)\r\n ds = ds.map(tok, load_from_cache_file=True)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n... or with load_dataset (here I get the message that `load_dataset` can reuse the dataset, but still I see all samples being processed via the tqdm progressbar):\r\n\r\n```python\r\nfrom functools import partial\r\n\r\nimport spacy\r\nfrom datasets import load_dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, sample):\r\n return {\"tok\": [t.text for t in nlp(sample[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n tok_func = partial(tokenize, nlp)\r\n ds = load_dataset('text', data_files=fin)\r\n ds = ds[\"train\"].map(tok_func)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"It looks like every time you load `en_core_web_sm` you get a different python object:\r\n```python\r\nimport spacy\r\nfrom datasets.fingerprint import Hasher\r\n\r\nnlp1 = spacy.load(\"en_core_web_sm\")\r\nnlp2 = spacy.load(\"en_core_web_sm\")\r\nHasher.hash(nlp1), Hasher.hash(nlp2)\r\n# ('f6196a33882fea3b', 'a4c676a071f266ff')\r\n```\r\nHere is a list of attributes that have different hashes for `nlp1` and `nlp2`:\r\n- tagger\r\n- parser\r\n- entity\r\n- pipeline (it's the list of the three attributes above)\r\n\r\nI just took a look at the tagger for example and I found subtle differences (there may be other differences though):\r\n```python\r\nnlp1.tagger.model.tok2vec.embed.id, nlp2.tagger.model.tok2vec.embed.id\r\n# (1721, 2243)\r\n```\r\n\r\nWe can try to find all the differences and find the best way to hash those objects properly",
"Thanks for searching! I went looking, and found that this is an implementation detail of thinc\r\n\r\nhttps://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98\r\n\r\nPresumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not think that this can be changed on their end - but I will ask what exactly it is for (I'm curious).\r\n\r\nDo you think it is overkill to write something into the hasher explicitly to deal with spaCy models? It seems like something that is beneficial to many, but I do not know if you are open to adding third-party-specific ways to deal with this. If you are, I can have a look for this specific case how we can ignore `thinc.Model.id` from the hasher.",
"It can be even simpler to hash the bytes of the pipeline instead\r\n```python\r\nnlp1.to_bytes() == nlp2.to_bytes() # True\r\n```\r\n\r\nIMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that).\r\nWhat could be done on Spacy's side instead (if they think it's nice to have) is to implement a custom pickling for these classes using `to_bytes`/`from_bytes` to have deterministic pickle dumps.\r\n\r\nFinally I think it would be nice in the future to add an API to let `datasets` users control this kind of things. Something like being able to define your own hashing if you use complex objects.\r\n```python\r\[email protected]_hash(spacy.language.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n```",
"I do not quite understand what you mean. as far as I can tell, using `to_bytes` does a pickle dump behind the scene (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler and running `dumps(nlp)` should also be deterministic? I guess that would require `__setstate__` and `__getstate__` methods on all the objects that have to/from_bytes. I'll have a listen over at spaCy what they think, and if that would solve the issue. I'll try this locally first, if I find the time.\r\n\r\nI agree that having the option to use a custom hasher would be useful. I like your suggestion!\r\n\r\nEDIT: after trying some things and reading through their API, it seems that they explicitly do not want this. https://spacy.io/usage/saving-loading#pipeline\r\n\r\n> When serializing the pipeline, keep in mind that this will only save out the binary data for the individual components to allow spaCy to restore them – not the entire objects. This is a good thing, because it makes serialization safe. But it also means that you have to take care of storing the config, which contains the pipeline configuration and all the relevant settings.\r\n\r\nBest way forward therefore seems to implement the ability to specify a hasher depending on the objects that are pickled, as you suggested. I can work on this if that is useful. I could use some pointers as to how you would like to implement the `register_hash` functionality though. I assume using `catalogue` over at Explosion might be a good starting point.\r\n\r\n",
"Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears.\r\n\r\n```shell\r\ngit clone https://github.com/explosion/spaCy.git\r\ncd spaCy/\r\ngit checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf\r\ncd ..\r\n\r\ngit clone https://github.com/BramVanroy/datasets.git\r\ncd datasets\r\ngit checkout registry\r\npip install -e .\r\npip install ../spaCy\r\nspacy download en_core_web_sm\r\n```\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets.fingerprint import Hasher\r\nfrom datasets.utils.registry import hashers\r\n\r\[email protected](spacy.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"your/large/file\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n # This is now always the same yay!\r\n print(Hasher.hash(nlp))\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n # But this is not...\r\n print(Hasher.hash(tokenize))\r\n # ... because of this\r\n print(Hasher.hash(nlp.__call__))\r\n ds = ds[\"train\"].map(tokenize)\r\n print(ds[0:2])\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Hi ! I just answered in your PR :) In order for your custom hashing to be used for nested objects, you must integrate it into our recursive pickler that we use for hashing.",
"I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So for instance instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? This will mean calling `spacy.load()` inside the work function, but this is no worse than having to call `pickle.load()` on the contents of the NLP object anyway -- in fact you'll generally find `spacy.load()` faster, apart from the disk read.\r\n\r\nIf you need to pass in the bytes data and don't want to read from disk, you could do something like this:\r\n\r\n```\r\nmsg = (nlp.lang, nlp.to_bytes())\r\n\r\ndef unpack(lang, bytes_data):\r\n return spacy.blank(lang).from_bytes(bytes_data)\r\n```\r\n\r\nI think that should probably work: the Thinc `model.to_dict()` method (which is used by the `model.to_bytes()` method) doesn't pack the model's ID into the message, so the `nlp.to_bytes()` that you get shouldn't be affected by the global IDs. So you should get a clean message from `nlp.to_bytes()` that doesn't depend on the global state.",
"Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping.\r\n\r\n`datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a \"fingerprint\" or hash). So it never needs to re-load that dump - it just needs its value to create a hash. If a fingerprint is identical to a cached fingerprint, then the result can be retrieved from the on-disk cache. (@lhoestq or @mariosasko can correct me if I'm wrong.)\r\n\r\nI was experiencing the issue that parsing with spaCy gave me a different fingerprint on every run of the script and thus it could never load the processed dataset from cache. At first I thought the reason was that spaCy Language objects were not picklable with recursive dill, but even after [adjusting for that](https://github.com/explosion/spaCy/pull/9593) the issue persisted. @lhoestq found that this is due to the changing `id`, which you discussed [here](https://github.com/explosion/spaCy/discussions/9609#discussioncomment-1661081). So yes, you are right. On the surface there simply seems to be an incompatibility between `datasets` default caching functionality as it is currently implemented and `spacy.Language`.\r\n\r\nThe [linked PR](https://github.com/huggingface/datasets/pull/3224) aims to remedy that, though. Up to now I have put some effort into making it easier to define your own \"pickling\" function for a given type (and optionally any of its subclasses). That allows us to tell `datasets` that instead of doing `dill.save(nlp)` (non-deterministic), to use `dill.save(nlp.to_bytes())` (deterministic). When I find some more time, the PR [will be expanded](https://github.com/huggingface/datasets/pull/3224#issuecomment-968958528) to improve the user-experience a bit and add a built-in function to pickle `spacy.Language` as one of the defaults (using `to_bytes()`).",
"Is there a workaround for this? maybe by explicitly requesting datasets to cache the result of `.map()`?",
"Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n\r\nAs a workaround you can set the fingerprint that is going to be used by the cache:\r\n```python\r\nresult = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n```\r\nAny future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n\r\n**Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**",
"I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n\r\n```\r\nDataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\nParameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform [email protected] couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\nAnd when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n\r\nFor me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n\r\n```\r\ndill 0.3.4\r\nmultiprocess 0.70.12.2 \r\n```",
"> Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n> \r\n> As a workaround you can set the fingerprint that is going to be used by the cache:\r\n> \r\n> ```python\r\n> result = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n> ```\r\n> \r\n> Any future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n> \r\n> **Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**\r\n\r\nIs the argument `new_fingerprint` available for datasetDict ? I can only use it on arrow datasets but might be useful to generalize it to DatasetDict's map as well ? @lhoestq ",
"> I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n> \r\n> ```\r\n> Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\n> Parameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform [email protected] couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n> ```\r\n> \r\n> And when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n> \r\n> For me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n> \r\n> ```\r\n> dill 0.3.4\r\n> multiprocess 0.70.12.2 \r\n> ```\r\n\r\nThis worked for me - thanks!",
"I see this has just been closed - it seems quite relevant to another tokenizer I have been trying to use, the `vinai/phobert` family of tokenizers\r\n\r\nhttps://huggingface.co/vinai/phobert-base\r\nhttps://huggingface.co/vinai/phobert-large\r\n\r\nI ran into an issue where a large dataset took several hours to tokenize, the process hung, and I was unable to use the cached version of the tokenized data:\r\n\r\nhttps://discuss.huggingface.co/t/cache-parallelize-long-tokenization-step/25791/3\r\n\r\nI don't see any way to specify the hash of the tokenizer or the fingerprint of the tokenized data to use, so is the tokenized dataset basically lost at this point? Is there a good way to avoid this happening again if I retokenize the data?\r\n",
"In your case it looks like the job failed before caching the data - maybe one of the processes crashed",
"Interesting. Thanks for the observation. Any suggestions on how to start tracking that down? Perhaps run it singlethreaded and see if it crashes?",
"You can monitor your RAM and disk space in case a process dies from OOM or disk full, and when it hangs you can check how many processes are running. IIRC there are other start methods for multiprocessing in python that may show an error message if a process dies.\r\n\r\nRunning on a single process can also help debugging this indeed",
"https://github.com/huggingface/datasets/issues/3178#issuecomment-1189435462\r\n\r\nThe solution does not solve for using commonvoice dataset (\"mozilla-foundation/common_voice_11_0\")",
"Hi @tung-msol could you open a new issue and share the error you got and the map function you used ?"
] | 1,635,512,169,000 | 1,672,846,396,000 | 1,667,409,523,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
## Steps to reproduce the bug
Here is a [colab](https://colab.research.google.com/drive/1gt75LCBIzsmBMvvipEOvWulvyZseBiA7?usp=sharing), but for some reason I cannot reproduce it there. That may have to do with logging/tqdm on Colab, or with running things in notebooks. I tried the code below on Windows and Ubuntu as a Python script and got the same issue (warning below).
```python
import pickle
from datasets import load_dataset
import spacy
class Processor:
def __init__(self):
self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])
@staticmethod
def collate(batch):
return [d["en"] for d in batch]
def parse(self, batch):
batch = batch["translation"]
return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}
def process(self):
ds = load_dataset("wmt16", "de-en", split="train[:10%]")
ds = ds.map(self.parse, batched=True, num_proc=6)
if __name__ == '__main__':
pr = Processor()
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr, f)
print("Successfully pickled!")
pr.process()
```
---
Here is a small change that includes `Hasher.hash` and shows that the hasher cannot successfully pickle parts of the NLP object.
```python
from datasets.fingerprint import Hasher
import pickle
from datasets import load_dataset
import spacy
class Processor:
def __init__(self):
self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])
@staticmethod
def collate(batch):
return [d["en"] for d in batch]
def parse(self, batch):
batch = batch["translation"]
return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}
def process(self):
ds = load_dataset("wmt16", "de-en", split="train[:10]")
return ds.map(self.parse, batched=True)
if __name__ == '__main__':
pr = Processor()
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr, f)
print("Successfully pickled class instance!")
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr.nlp, f)
print("Successfully pickled nlp!")
# fails
print(Hasher.hash(pr.nlp))
pr.process()
```
## Expected results
This should be picklable, fingerprinting should work, and no warning should be shown.
## Actual results
In the first snippet, I get this warning
> Parameter 'function'=<function Processor.parse at 0x7f44982247a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
In the second, I get this traceback, which points to the `Hasher.hash` line.
```
Traceback (most recent call last):
File " \Python\Python36\lib\pickle.py", line 918, in save_global
obj2, parent = _getattribute(module, name)
File " \Python\Python36\lib\pickle.py", line 266, in _getattribute
.format(name, obj))
AttributeError: Can't get local attribute 'add_codes.<locals>.ErrorsWithCodes' on <function add_codes at 0x00000296FF606EA0>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File " scratch_4.py", line 40, in <module>
print(Hasher.hash(pr.nlp))
File " \lib\site-packages\datasets\fingerprint.py", line 191, in hash
return cls.hash_default(value)
File " \lib\site-packages\datasets\fingerprint.py", line 184, in hash_default
return cls.hash_bytes(dumps(value))
File " \lib\site-packages\datasets\utils\py_utils.py", line 345, in dumps
dump(obj, file)
File " \lib\site-packages\datasets\utils\py_utils.py", line 320, in dump
Pickler(file, recurse=True).dump(obj)
File " \lib\site-packages\dill\_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File " \Python\Python36\lib\pickle.py", line 409, in dump
self.save(obj)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 781, in save_list
self._batch_appends(obj)
File " \Python\Python36\lib\pickle.py", line 805, in _batch_appends
save(x)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1176, in save_instancemethod0
pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj)
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\datasets\utils\py_utils.py", line 523, in save_function
obj=obj,
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 751, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 605, in save_reduce
save(cls)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1439, in save_type
StockPickler.save_global(pickler, obj, name=name)
File " \Python\Python36\lib\pickle.py", line 922, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle <class 'spacy.errors.add_codes.<locals>.ErrorsWithCodes'>: it's not found as spacy.errors.add_codes.<locals>.ErrorsWithCodes
```
## Environment info
Tried on both Linux and Windows
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0 + Python 3.7.9; Linux-5.11.0-38-generic-x86_64-with-Ubuntu-20.04-focal + Python 3.7.12
- PyArrow version: 6.0.0
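## Workaround (sketch)
For reference, a workaround in the spirit of the suggestions in the discussion above is to pass the model name instead of the `nlp` object, so that the mapped function captures nothing that `datasets` struggles to hash. This is only a sketch (it assumes `en_core_web_sm` is installed) and it reloads the model on every batch, which is slow:
```python
import spacy
from datasets import load_dataset


def tokenize(batch, model_name="en_core_web_sm"):
    # Loading by name inside the function avoids capturing the Language object,
    # at the cost of reloading the model for each batch/worker.
    nlp = spacy.load(model_name, disable=["tagger", "parser", "ner", "lemmatizer"])
    docs = nlp.pipe([d["en"] for d in batch["translation"]])
    return {"translation_tok": [{"en_tok": " ".join(t.text for t in doc)} for doc in docs]}


ds = load_dataset("wmt16", "de-en", split="train[:10]")
ds = ds.map(tokenize, batched=True)
```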
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3178/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3178/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3177/comments | https://api.github.com/repos/huggingface/datasets/issues/3177/events | https://github.com/huggingface/datasets/issues/3177 | 1,039,487,780 | I_kwDODunzps499VMk | 3,177 | More control over TQDM when using map/filter with multiple processes | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nIt's hard to provide an API that would cover all use-cases with tqdm in this project.\r\n\r\nHowever, you can make it work by defining a custom decorator (a bit hacky tho) as follows:\r\n```python\r\nimport datasets\r\n\r\ndef progress_only_on_rank_0(func):\r\n def wrapper(*args, **kwargs):\r\n rank = kwargs.get(\"rank\")\r\n disable_tqdm = kwargs.get(\"disable_tqdm\", False)\r\n disable_tqdm = True if rank is not None and rank > 0 else disable_tqdm\r\n kwargs[\"disable_tqdm\"] = disable_tqdm\r\n return func(*args, **kwargs)\r\n return wrapper\r\n \r\ndatasets.Dataset._map_single = progress_only_on_rank_0(datasets.Dataset._map_single)\r\n``` \r\n\r\nEDIT: Ups, closed by accident.\r\n\r\nThanks for the provided links. `Trainer` requires this for training in multi-node distributed setting. However, `Dataset.map` doesn't support that yet.\r\n\r\nDo you have an API for this in mind? `Dataset.map` is already bloated with the arguments, so IMO it's not a good idea to add a new arg there.\r\n\r\n",
"Inspiration may be found at `transformers`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/4a394cf53f05e73ab9bbb4b179a40236a5ffe45a/src/transformers/trainer.py#L1231-L1233\r\n\r\nTo get unique IDs for each worker, see https://stackoverflow.com/a/10192611/1150683"
] | 1,635,508,576,000 | 1,676,319,400,000 | 1,676,319,400,000 | CONTRIBUTOR | null | null | null | It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets.
```python
dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6)
```
The above snippet leads to a lot of TQDM bars, and depending on your terminal, these will not overwrite each other but keep pushing each other down.
```
#0: 0%| | 0/13 [00:00<?, ?ba/s]
#1: 0%| | 0/13 [00:00<?, ?ba/s]
#2: 0%| | 0/13 [00:00<?, ?ba/s]
#3: 0%| | 0/13 [00:00<?, ?ba/s]
#4: 0%| | 0/13 [00:00<?, ?ba/s]
#5: 0%| | 0/13 [00:00<?, ?ba/s]
#0: 8%| | 1/13 [00:00<?, ?ba/s]
#1: 8%| | 1/13 [00:00<?, ?ba/s]
...
```
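For illustration only — this is not `datasets` code — here is a self-contained example of the behaviour I mean, using plain `multiprocessing` and `tqdm`: several workers run, but only rank 0 draws a progress bar.
```python
# Illustration of "only rank 0 shows a bar", independent of `datasets`.
from multiprocessing import Pool

from tqdm.auto import tqdm


def work(args):
    rank, items = args
    # Only the rank-0 worker wraps its iterator in tqdm; the others stay silent.
    iterator = tqdm(items, desc=f"#{rank}") if rank == 0 else items
    return [x * 2 for x in iterator]


if __name__ == "__main__":
    shards = [(rank, list(range(100_000))) for rank in range(6)]
    with Pool(6) as pool:
        results = pool.map(work, shards)
```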
In short, it would be welcome to have an option to only show the progress bar of rank 0. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3177/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3176/comments | https://api.github.com/repos/huggingface/datasets/issues/3176/events | https://github.com/huggingface/datasets/pull/3176 | 1,039,068,312 | PR_kwDODunzps4t00xS | 3,176 | OpenSLR dataset: update generate_examples to properly extract data for SLR83 | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Also fix #3125."
] | 1,635,469,167,000 | 1,636,042,845,000 | 1,635,501,849,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3176",
"html_url": "https://github.com/huggingface/datasets/pull/3176",
"diff_url": "https://github.com/huggingface/datasets/pull/3176.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3176.patch",
"merged_at": 1635501849000
} | Fixed #3168.
The SLR83 indices are CSV files, and there wasn't any code in openslr.py to process these files properly. The end result was an empty table.
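In rough terms, the kind of index handling that was missing looks like the sketch below. The two-column `file_id, transcription` layout is an assumption for illustration only — the real SLR83 `line_index.csv` layout should be checked against the downloaded archives:
```python
import csv
import os


def parse_slr83_index(index_path, audio_dir):
    # Assumed layout: the first field is the audio file id and the last field is
    # the transcription; verify against the actual SLR83 index files.
    examples = []
    with open(index_path, encoding="utf-8") as f:
        for row in csv.reader(f):
            file_id, sentence = row[0].strip(), row[-1].strip()
            examples.append(
                {"path": os.path.join(audio_dir, f"{file_id}.wav"), "sentence": sentence}
            )
    return examples
```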
I've added code to properly process these CSV files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3176/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3176/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3175/comments | https://api.github.com/repos/huggingface/datasets/issues/3175/events | https://github.com/huggingface/datasets/pull/3175 | 1,038,945,271 | PR_kwDODunzps4t0bXw | 3,175 | Add docs for `to_tf_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"This looks great, thank you!",
"Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes too long to load. Could you try to reduce its size ? Than I think we can merge :)"
] | 1,635,454,522,000 | 1,635,953,976,000 | 1,635,934,043,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3175",
"html_url": "https://github.com/huggingface/datasets/pull/3175",
"diff_url": "https://github.com/huggingface/datasets/pull/3175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3175.patch",
"merged_at": 1635934043000
} | This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`:
- Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅).
- Add an example for loading a dataset from multiple zipped CSV files to the Load section.
- Add an example for removing columns from an `IterableDataset`.
- Add graphic for visualizing streaming. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3175/timeline | null | null | true |