url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-2.12B | node_id stringlengths 18-32 | number int64 1-6.65k | title stringlengths 1-290 | user dict | labels listlengths 0-4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0-4 | milestone dict | comments int64 0-70 | created_at unknown | updated_at unknown | closed_at unknown | author_association stringclasses 3 values | active_lock_reason float64 | draft float64 0-1 ⌀ | pull_request dict | body stringlengths 0-228k ⌀ | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app float64 | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/422/comments | https://api.github.com/repos/huggingface/datasets/issues/422/events | https://github.com/huggingface/datasets/pull/422 | 663,028,497 | MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2 | 422 | - Corrected encoding for IMDB. | {
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghazi-f",
"id": 25091538,
"login": "ghazi-f",
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghazi-f"
} | [] | closed | false | null | [] | null | 0 | "2020-07-21T13:46:59Z" | "2020-07-22T16:02:53Z" | "2020-07-22T16:02:53Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/422.diff",
"html_url": "https://github.com/huggingface/datasets/pull/422",
"merged_at": "2020-07-22T16:02:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/422.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/422"
} | The preparation phase (after the download phase) crashed on Windows because the charmap encoding was not able to decode certain characters. This change, suggested in Issue #347, fixes it for the IMDB dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/422/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/421/comments | https://api.github.com/repos/huggingface/datasets/issues/421/events | https://github.com/huggingface/datasets/pull/421 | 662,213,864 | MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1 | 421 | Style change | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | 3 | "2020-07-20T20:08:29Z" | "2020-07-22T16:08:40Z" | "2020-07-22T16:08:39Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/421",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/421"
} | `make quality` and `make style` run on the scripts | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/420/comments | https://api.github.com/repos/huggingface/datasets/issues/420/events | https://github.com/huggingface/datasets/pull/420 | 662,029,782 | MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2 | 420 | Better handle nested features | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-20T16:44:13Z" | "2020-07-21T08:20:49Z" | "2020-07-21T08:09:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/420",
"merged_at": "2020-07-21T08:09:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/420"
} | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342)
- make flatten handle deep features (useful for tfrecords conversion in #339)
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/420/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/420/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/419/comments | https://api.github.com/repos/huggingface/datasets/issues/419/events | https://github.com/huggingface/datasets/pull/419 | 661,974,747 | MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz | 419 | EmoContext dataset add | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | 0 | "2020-07-20T15:48:45Z" | "2020-07-24T08:22:01Z" | "2020-07-24T08:22:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/419.diff",
"html_url": "https://github.com/huggingface/datasets/pull/419",
"merged_at": "2020-07-24T08:22:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/419.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/419"
} | EmoContext Dataset add
Signed-off-by: lordtt13 <[email protected]> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/419/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/418/comments | https://api.github.com/repos/huggingface/datasets/issues/418/events | https://github.com/huggingface/datasets/issues/418 | 661,914,873 | MDU6SXNzdWU2NjE5MTQ4NzM= | 418 | Addition of google drive links to dl_manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | 3 | "2020-07-20T14:52:02Z" | "2020-07-20T15:39:32Z" | "2020-07-20T15:39:32Z" | CONTRIBUTOR | null | null | null | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager (it was downloading nothing from the Drive links) and use gdown instead.
This is the script for me:
```python
import json
import os

import gdown
import nlp

# Note: _DESCRIPTION and _CITATION are defined elsewhere in the script (not shown here).


class EmoConfig(nlp.BuilderConfig):
    """BuilderConfig for EmoContext."""

    def __init__(self, **kwargs):
        """BuilderConfig for EmoContext.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(EmoConfig, self).__init__(**kwargs)


_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"


class EmoDataset(nlp.GeneratorBasedBuilder):
    """ SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """

    VERSION = nlp.Version("1.0.0")
    force = False

    def _info(self):
        return nlp.DatasetInfo(
            description=_DESCRIPTION,
            features=nlp.Features(
                {
                    "text": nlp.Value("string"),
                    "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
                }
            ),
            supervised_keys=None,
            homepage="https://www.aclweb.org/anthology/S19-2005/",
            citation=_CITATION,
        )

    def _get_drive_url(self, url):
        # Turn a Google Drive "view" link into a direct-download link usable by gdown.
        base_url = 'https://drive.google.com/uc?id='
        split_url = url.split('/')
        return base_url + split_url[5]

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # Download with gdown instead of dl_manager, since dl_manager returned nothing for the Drive links.
        if not os.path.exists("emo-train.json") or self.force:
            gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet=True)
        if not os.path.exists("emo-test.json") or self.force:
            gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet=True)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={
                    "filepath": "emo-train.json",
                    "split": "train",
                },
            ),
            nlp.SplitGenerator(
                name=nlp.Split.TEST,
                gen_kwargs={"filepath": "emo-test.json", "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """ Yields examples. """
        with open(filepath, 'rb') as f:
            data = json.load(f)
            for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
                yield id_, {
                    "text": text,
                    "label": label,
                }
```
Can someone help me add support for Google Drive links in the default dl_manager, or add gdown as another download manager? I'd like to add this dataset to nlp's official database. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/418/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/417/comments | https://api.github.com/repos/huggingface/datasets/issues/417/events | https://github.com/huggingface/datasets/pull/417 | 661,804,054 | MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5 | 417 | Fix docstrins multiple metrics instances | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-20T13:08:59Z" | "2020-07-22T09:51:00Z" | "2020-07-22T09:50:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/417.diff",
"html_url": "https://github.com/huggingface/datasets/pull/417",
"merged_at": "2020-07-22T09:50:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/417.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/417"
} | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However, we had issues when instantiating multiple metrics (docstrings were duplicated).
This should fix #304 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/417/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/416/comments | https://api.github.com/repos/huggingface/datasets/issues/416/events | https://github.com/huggingface/datasets/pull/416 | 661,635,393 | MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4 | 416 | Fix xtreme panx directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-07-20T10:09:17Z" | "2020-07-21T08:15:46Z" | "2020-07-21T08:15:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/416",
"merged_at": "2020-07-21T08:15:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/416"
} | Fix #412 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/416/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/415/comments | https://api.github.com/repos/huggingface/datasets/issues/415/events | https://github.com/huggingface/datasets/issues/415 | 660,687,076 | MDU6SXNzdWU2NjA2ODcwNzY= | 415 | Something is wrong with WMT 19 kk-en dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenghaoMou",
"id": 32014649,
"login": "ChenghaoMou",
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenghaoMou"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | 0 | "2020-07-19T08:18:51Z" | "2020-07-20T09:54:26Z" | null | NONE | null | null | null | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'}
``` | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/415/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/414/comments | https://api.github.com/repos/huggingface/datasets/issues/414/events | https://github.com/huggingface/datasets/issues/414 | 660,654,013 | MDU6SXNzdWU2NjA2NTQwMTM= | 414 | from_dict delete? | {
"avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4",
"events_url": "https://api.github.com/users/hackerxiaobai/events{/privacy}",
"followers_url": "https://api.github.com/users/hackerxiaobai/followers",
"following_url": "https://api.github.com/users/hackerxiaobai/following{/other_user}",
"gists_url": "https://api.github.com/users/hackerxiaobai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hackerxiaobai",
"id": 22817243,
"login": "hackerxiaobai",
"node_id": "MDQ6VXNlcjIyODE3MjQz",
"organizations_url": "https://api.github.com/users/hackerxiaobai/orgs",
"received_events_url": "https://api.github.com/users/hackerxiaobai/received_events",
"repos_url": "https://api.github.com/users/hackerxiaobai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hackerxiaobai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackerxiaobai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hackerxiaobai"
} | [] | closed | false | null | [] | null | 2 | "2020-07-19T07:08:36Z" | "2020-07-21T02:21:17Z" | "2020-07-21T02:21:17Z" | NONE | null | null | null | AttributeError: type object 'Dataset' has no attribute 'from_dict' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/414/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/414/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/413/comments | https://api.github.com/repos/huggingface/datasets/issues/413/events | https://github.com/huggingface/datasets/issues/413 | 660,063,655 | MDU6SXNzdWU2NjAwNjM2NTU= | 413 | Is there a way to download only NQ dev? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tholor",
"id": 1563902,
"login": "tholor",
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"repos_url": "https://api.github.com/users/tholor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tholor"
} | [] | closed | false | null | [] | null | 3 | "2020-07-18T10:28:23Z" | "2022-02-11T09:50:21Z" | "2022-02-11T09:50:21Z" | NONE | null | null | null | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner")
```
But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/413/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/412/comments | https://api.github.com/repos/huggingface/datasets/issues/412/events | https://github.com/huggingface/datasets/issues/412 | 660,047,139 | MDU6SXNzdWU2NjAwNDcxMzk= | 412 | Unable to load XTREME dataset from disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | 3 | "2020-07-18T09:55:00Z" | "2020-07-21T08:15:44Z" | "2020-07-21T08:15:44Z" | MEMBER | null | null | null | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.
As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:
```
# path where load_dataset is looking for fr.tar.gz
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/
# path where it actually exists
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/
```
## Steps to reproduce the problem
1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)
2. Run the following code snippet
```python
from nlp import load_dataset
# AmazonPhotos.zip is in the root of the folder
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
```
3. Here is the stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-26786bb5fa93> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
464 split_dict = SplitDict(dataset_name=self.name)
465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
467 # Checksums verification
468 if verify_infos:
/usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager)
725 panx_dl_dir = dl_manager.extract(panx_path)
726 lang = self.config.name.split(".")[1]
--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz"))
728 return [
729 nlp.SplitGenerator(
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
170 return tuple(mapped)
171 # Singleton
--> 172 return function(data_struct)
173
174
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
203 elif urlparse(url_or_filename).scheme == "":
204 # File, but it doesn't exist.
--> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename))
206 else:
207 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist
```
## OS and hardware
```
- `nlp` version: 0.3.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/412/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/411/comments | https://api.github.com/repos/huggingface/datasets/issues/411/events | https://github.com/huggingface/datasets/pull/411 | 659,393,398 | MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy | 411 | Sbf | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-17T16:19:45Z" | "2020-07-21T09:13:46Z" | "2020-07-21T09:13:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/411",
"merged_at": "2020-07-21T09:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/411"
} | This PR adds the Social Bias Frames Dataset (ACL 2020).
dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/410/comments | https://api.github.com/repos/huggingface/datasets/issues/410/events | https://github.com/huggingface/datasets/pull/410 | 659,242,871 | MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3 | 410 | 20newsgroup | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-17T13:07:57Z" | "2020-07-20T07:05:29Z" | "2020-07-20T07:05:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/410",
"merged_at": "2020-07-20T07:05:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/410"
} | Add 20Newsgroup dataset.
#353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/410/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/409/comments | https://api.github.com/repos/huggingface/datasets/issues/409/events | https://github.com/huggingface/datasets/issues/409 | 659,128,611 | MDU6SXNzdWU2NTkxMjg2MTE= | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | {
"avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4",
"events_url": "https://api.github.com/users/morganmcg1/events{/privacy}",
"followers_url": "https://api.github.com/users/morganmcg1/followers",
"following_url": "https://api.github.com/users/morganmcg1/following{/other_user}",
"gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/morganmcg1",
"id": 20516801,
"login": "morganmcg1",
"node_id": "MDQ6VXNlcjIwNTE2ODAx",
"organizations_url": "https://api.github.com/users/morganmcg1/orgs",
"received_events_url": "https://api.github.com/users/morganmcg1/received_events",
"repos_url": "https://api.github.com/users/morganmcg1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/morganmcg1"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-07-17T10:36:28Z" | "2020-07-21T14:34:52Z" | "2020-07-21T14:34:52Z" | NONE | null | null | null | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
## Full Stacktrace
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-feb740dbec9a> in <module>
1 dataset = load_dataset('glue', 'mrpc', split='train')
----> 2 dataset = dataset.train_test_split(test_size=0.2)
~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)
1032 "writer_batch_size": writer_batch_size,
1033 }
-> 1034 train_kwargs = cache_kwargs.deepcopy()
1035 train_kwargs["split"] = "train"
1036 test_kwargs = cache_kwargs.deepcopy()
AttributeError: 'dict' object has no attribute 'deepcopy'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/409/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/408/comments | https://api.github.com/repos/huggingface/datasets/issues/408/events | https://github.com/huggingface/datasets/pull/408 | 659,064,144 | MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0 | 408 | Add tests datasets gcp | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-17T09:23:27Z" | "2020-07-17T09:26:57Z" | "2020-07-17T09:26:56Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/408.diff",
"html_url": "https://github.com/huggingface/datasets/pull/408",
"merged_at": "2020-07-17T09:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/408.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/408"
} | Some datasets are available on our Google Cloud Storage in Arrow format, so that users don't need to process the data.
These tests make sure that they're always available. They also make sure that their scripts are in sync between S3 and the repo.
This should avoid future issues like #407 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/408/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/407/comments | https://api.github.com/repos/huggingface/datasets/issues/407/events | https://github.com/huggingface/datasets/issues/407 | 658,672,736 | MDU6SXNzdWU2NTg2NzI3MzY= | 407 | MissingBeamOptions for Wikipedia 20200501.en | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 4 | "2020-07-16T23:48:03Z" | "2021-01-12T11:41:16Z" | "2020-07-17T14:24:28Z" | CONTRIBUTOR | null | null | null | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...
Traceback (most recent call last):
File "scripts/download.py", line 11, in <module>
fire.Fire(download_pretrain)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "scripts/download.py", line 6, in download_pretrain
nlp.load_dataset('wikipedia', "20200501.en", split='train')
File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset
save_infos=save_infos,
File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/407/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | null | [] | null | 7 | "2020-07-16T21:21:53Z" | "2023-08-16T09:52:39Z" | "2020-09-07T14:45:25Z" | CONTRIBUTOR | null | null | null | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/405/comments | https://api.github.com/repos/huggingface/datasets/issues/405/events | https://github.com/huggingface/datasets/pull/405 | 658,580,192 | MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3 | 405 | Make select() faster by batching reads | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | null | [] | null | 0 | "2020-07-16T21:19:45Z" | "2020-07-17T17:05:44Z" | "2020-07-17T16:51:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/405.diff",
"html_url": "https://github.com/huggingface/datasets/pull/405",
"merged_at": "2020-07-17T16:51:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/405.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/405"
} | Here's a benchmark:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
```
Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/405/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-16T17:27:05Z" | "2020-07-20T10:12:35Z" | "2020-07-20T10:12:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"merged_at": "2020-07-20T10:12:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404"
} | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
However, instantiating a metric twice (two different experiments) without specifying a seed can produce different results. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | null | true |
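A minimal sketch of the usage described in #404 above. The `seed` keyword name and the choice of the rouge metric are assumptions made for illustration, not a confirmed API.
```python
import nlp

# Illustrative only: per #404, a numpy seed can be set when instantiating a
# metric so that compute() gives reproducible results. The `seed` keyword and
# the rouge metric are assumptions for this sketch.
rouge = nlp.load_metric("rouge", seed=42)

predictions = ["the cat sat on the mat"]
references = ["a cat sat on the mat"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```
Calling `compute` again on the same instance with the same inputs should then return identical results.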
https://api.github.com/repos/huggingface/datasets/issues/403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/403/comments | https://api.github.com/repos/huggingface/datasets/issues/403/events | https://github.com/huggingface/datasets/pull/403 | 658,325,756 | MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2 | 403 | return python objects instead of arrays by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-16T15:51:52Z" | "2020-07-17T11:37:01Z" | "2020-07-17T11:37:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/403",
"merged_at": "2020-07-17T11:37:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/403"
} | We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/403/timeline | null | null | true |
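A small, self-contained illustration of the behaviour fixed in #403 above, using a plain pyarrow table instead of an `nlp.Dataset` to keep the example minimal.
```python
import pyarrow as pa

# A list column converted via to_pandas() comes back as numpy arrays, while
# to_pydict() keeps plain python lists — the reason #403 switched conversion.
table = pa.Table.from_pydict({"input_ids": [[101, 102], [103, 104]]})

via_pandas = table.to_pandas().to_dict("list")
via_pydict = table.to_pydict()

print(type(via_pandas["input_ids"][0]))  # <class 'numpy.ndarray'>
print(type(via_pydict["input_ids"][0]))  # <class 'list'>
```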
https://api.github.com/repos/huggingface/datasets/issues/402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/402/comments | https://api.github.com/repos/huggingface/datasets/issues/402/events | https://github.com/huggingface/datasets/pull/402 | 658,001,288 | MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0 | 402 | Search qa | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-16T09:00:10Z" | "2020-07-16T14:27:00Z" | "2020-07-16T14:26:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/402.diff",
"html_url": "https://github.com/huggingface/datasets/pull/402",
"merged_at": "2020-07-16T14:26:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/402.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/402"
} | add SearchQA dataset
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/402/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/401/comments | https://api.github.com/repos/huggingface/datasets/issues/401/events | https://github.com/huggingface/datasets/pull/401 | 657,996,252 | MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0 | 401 | add web_questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 3 | "2020-07-16T08:54:59Z" | "2020-08-06T06:16:20Z" | "2020-08-06T06:16:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/401.diff",
"html_url": "https://github.com/huggingface/datasets/pull/401",
"merged_at": "2020-08-06T06:16:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/401.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/401"
} | add Web Question dataset
#336
Maybe you can help with the dummy_data structure, @patrickvonplaten? It is still broken. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/401/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/400/comments | https://api.github.com/repos/huggingface/datasets/issues/400/events | https://github.com/huggingface/datasets/pull/400 | 657,975,600 | MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5 | 400 | Web questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-16T08:28:29Z" | "2020-07-16T08:50:51Z" | "2020-07-16T08:42:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400"
} | add the WebQuestion dataset
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/400/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/399/comments | https://api.github.com/repos/huggingface/datasets/issues/399/events | https://github.com/huggingface/datasets/pull/399 | 657,841,433 | MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy | 399 | Spelling mistake | {
"avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4",
"events_url": "https://api.github.com/users/BlancRay/events{/privacy}",
"followers_url": "https://api.github.com/users/BlancRay/followers",
"following_url": "https://api.github.com/users/BlancRay/following{/other_user}",
"gists_url": "https://api.github.com/users/BlancRay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BlancRay",
"id": 9410067,
"login": "BlancRay",
"node_id": "MDQ6VXNlcjk0MTAwNjc=",
"organizations_url": "https://api.github.com/users/BlancRay/orgs",
"received_events_url": "https://api.github.com/users/BlancRay/received_events",
"repos_url": "https://api.github.com/users/BlancRay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BlancRay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlancRay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BlancRay"
} | [] | closed | false | null | [] | null | 1 | "2020-07-16T04:37:58Z" | "2020-07-16T06:49:48Z" | "2020-07-16T06:49:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"merged_at": "2020-07-16T06:49:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399"
} | In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/399/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/398/comments | https://api.github.com/repos/huggingface/datasets/issues/398/events | https://github.com/huggingface/datasets/pull/398 | 657,511,962 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1 | 398 | Add inline links | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharatr21",
"id": 13381361,
"login": "bharatr21",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharatr21"
} | [] | closed | false | null | [] | null | 2 | "2020-07-15T17:04:04Z" | "2020-07-22T10:14:22Z" | "2020-07-22T10:14:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"merged_at": "2020-07-22T10:14:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398"
} | Add inline links to `Contributing.md` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/398/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/397/comments | https://api.github.com/repos/huggingface/datasets/issues/397/events | https://github.com/huggingface/datasets/pull/397 | 657,510,856 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4 | 397 | Add contiguous sharding | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 0 | "2020-07-15T17:02:58Z" | "2020-07-17T16:59:31Z" | "2020-07-17T16:59:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"merged_at": "2020-07-17T16:59:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397"
} | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
Usage:
```
nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/397/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/396/comments | https://api.github.com/repos/huggingface/datasets/issues/396/events | https://github.com/huggingface/datasets/pull/396 | 657,477,952 | MDExOlB1bGxSZXF1ZXN0NDQ5NTg3MDQ4 | 396 | Fix memory issue when doing select | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-15T16:15:04Z" | "2020-07-16T08:07:32Z" | "2020-07-16T08:07:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/396.diff",
"html_url": "https://github.com/huggingface/datasets/pull/396",
"merged_at": "2020-07-16T08:07:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/396.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/396"
} | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.
Fix #395 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/396/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/396/timeline | null | null | true |
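A rough sketch of the general idea behind #396 above: derive the cache file name of the new dataset from the small inputs that determine the result (indices and parameters) rather than from the `nlp.Dataset` object itself. The function below is purely illustrative and is not the library's actual implementation.
```python
import hashlib
import json

def select_cache_file_name(indices, params):
    # Hash only the indices and parameters, never the dataset object,
    # so nothing forces the full table into memory.
    payload = json.dumps({"indices": list(indices), "params": params}, sort_keys=True)
    return "cache-" + hashlib.md5(payload.encode("utf-8")).hexdigest() + ".arrow"

print(select_cache_file_name(range(1000), {"keep_in_memory": False}))
```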
https://api.github.com/repos/huggingface/datasets/issues/395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/395/comments | https://api.github.com/repos/huggingface/datasets/issues/395/events | https://github.com/huggingface/datasets/issues/395 | 657,454,983 | MDU6SXNzdWU2NTc0NTQ5ODM= | 395 | Memory issue when doing select | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 1 | "2020-07-15T15:43:38Z" | "2020-07-16T08:07:31Z" | "2020-07-16T08:07:31Z" | MEMBER | null | null | null | As noticed in #389, the following code loads the entire wikipedia in memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function together with all the wikipedia data.
This is not the case with `.map` or `.filter`.
However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/395/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/394/comments | https://api.github.com/repos/huggingface/datasets/issues/394/events | https://github.com/huggingface/datasets/pull/394 | 657,425,548 | MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0 | 394 | Remove remaining nested dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-15T15:05:52Z" | "2020-07-16T07:39:52Z" | "2020-07-16T07:39:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/394.diff",
"html_url": "https://github.com/huggingface/datasets/pull/394",
"merged_at": "2020-07-16T07:39:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/394.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/394"
} | This PR deletes the remaining unnecessary nested dict
#378 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/394/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/393/comments | https://api.github.com/repos/huggingface/datasets/issues/393/events | https://github.com/huggingface/datasets/pull/393 | 657,330,911 | MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz | 393 | Fix extracted files directory for the DownloadManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-15T12:59:55Z" | "2020-07-17T17:02:16Z" | "2020-07-17T17:02:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/393",
"merged_at": "2020-07-17T17:02:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/393"
} | The cache dir was often cluttered by extracted files because of the download manager.
For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/393/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/392/comments | https://api.github.com/repos/huggingface/datasets/issues/392/events | https://github.com/huggingface/datasets/pull/392 | 657,313,738 | MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx | 392 | Style change detection | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 0 | "2020-07-15T12:32:14Z" | "2020-07-21T13:18:36Z" | "2020-07-17T17:13:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/392.diff",
"html_url": "https://github.com/huggingface/datasets/pull/392",
"merged_at": "2020-07-17T17:13:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/392.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/392"
} | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now)
- I've converted the integer 0,1 values to a boolean
- Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/392/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/390/comments | https://api.github.com/repos/huggingface/datasets/issues/390/events | https://github.com/huggingface/datasets/pull/390 | 656,956,384 | MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3 | 390 | Concatenate datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 6 | "2020-07-14T23:24:37Z" | "2020-07-22T09:49:58Z" | "2020-07-22T09:49:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/390",
"merged_at": "2020-07-22T09:49:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/390"
} | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/390/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/389/comments | https://api.github.com/repos/huggingface/datasets/issues/389/events | https://github.com/huggingface/datasets/pull/389 | 656,921,768 | MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5 | 389 | Fix pickling of SplitDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | null | [] | null | 11 | "2020-07-14T21:53:39Z" | "2020-08-04T14:38:10Z" | "2020-08-04T14:38:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/389",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/389"
} | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
import nlp
import torch

wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
The error comes from line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/389/timeline | null | null | true |
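A standalone sketch of the workaround pattern described in #389 above, using a made-up `GuardedDict` class in place of the real `SplitDict`: a dict subclass that forbids `__setitem__` can still round-trip through pickle by defining an explicit `__reduce__`.
```python
import pickle

class GuardedDict(dict):
    """Stand-in for SplitDict: direct item assignment is forbidden."""

    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # Explicit pickle interface: rebuild from a plain dict so that
        # unpickling never goes through the forbidden __setitem__.
        return (_rebuild_guarded_dict, (dict(self),))

def _rebuild_guarded_dict(plain):
    restored = GuardedDict()
    for key, value in plain.items():
        restored.add(key, value)
    return restored

d = GuardedDict()
d.add("train", 100)
print(pickle.loads(pickle.dumps(d)))  # {'train': 100}
```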
https://api.github.com/repos/huggingface/datasets/issues/388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/388/comments | https://api.github.com/repos/huggingface/datasets/issues/388/events | https://github.com/huggingface/datasets/issues/388 | 656,707,497 | MDU6SXNzdWU2NTY3MDc0OTc= | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SamuelCahyawijaya",
"id": 2826602,
"login": "SamuelCahyawijaya",
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SamuelCahyawijaya"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 5 | "2020-07-14T15:36:41Z" | "2022-10-04T18:01:28Z" | "2022-10-04T18:01:28Z" | NONE | null | null | null | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
import nlp

nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs, but the download speed is **extremely slow**; the same behaviour is not observed with `wmt16` and `wmt18`.
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/388/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/387/comments | https://api.github.com/repos/huggingface/datasets/issues/387/events | https://github.com/huggingface/datasets/issues/387 | 656,361,357 | MDU6SXNzdWU2NTYzNjEzNTc= | 387 | Conversion through to_pandas output numpy arrays for lists instead of python objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 1 | "2020-07-14T06:24:01Z" | "2020-07-17T11:37:00Z" | "2020-07-17T11:37:00Z" | MEMBER | null | null | null | In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292,
1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938,
4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1])]}
>>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0])
<class 'numpy.ndarray'>
>>> dataset._data.slice(key, 1).to_pydict()
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/387/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/386/comments | https://api.github.com/repos/huggingface/datasets/issues/386/events | https://github.com/huggingface/datasets/pull/386 | 655,839,067 | MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4 | 386 | Update dataset loading and features - Add TREC dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 1 | "2020-07-13T13:10:18Z" | "2020-07-16T08:17:58Z" | "2020-07-16T08:17:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/386.diff",
"html_url": "https://github.com/huggingface/datasets/pull/386",
"merged_at": "2020-07-16T08:17:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/386.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/386"
} | This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is usually outdated). In particular, this makes it easier to iterate when writing a new dataset loading script.
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept lists, numpy arrays and PyTorch/TensorFlow tensors.
- add the TREC-6 dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/386/timeline | null | null | true |
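A short sketch of the more flexible `ClassLabel` behaviour described in #386 above, assuming `ClassLabel` is exposed at the top level of the package; the TREC-6 label names and the exact return types for list/array inputs are assumptions for illustration.
```python
import numpy as np
import nlp

# str2int / int2str on single values, python lists and numpy arrays.
labels = nlp.ClassLabel(names=["ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM"])

print(labels.str2int("HUM"))             # 3
print(labels.str2int(["ABBR", "NUM"]))   # e.g. [0, 5]
print(labels.int2str(np.array([1, 4])))  # e.g. ['DESC', 'LOC']
```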
https://api.github.com/repos/huggingface/datasets/issues/385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/385/comments | https://api.github.com/repos/huggingface/datasets/issues/385/events | https://github.com/huggingface/datasets/pull/385 | 655,663,997 | MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5 | 385 | Remove unnecessary nested dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 5 | "2020-07-13T08:46:23Z" | "2020-07-15T11:27:38Z" | "2020-07-15T10:03:53Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/385.diff",
"html_url": "https://github.com/huggingface/datasets/pull/385",
"merged_at": "2020-07-15T10:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/385.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/385"
} | This PR is removing unnecessary nested dictionary used in some datasets. For now the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/385/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/383/comments | https://api.github.com/repos/huggingface/datasets/issues/383/events | https://github.com/huggingface/datasets/pull/383 | 655,291,201 | MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaguilar",
"id": 5833357,
"login": "gaguilar",
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaguilar"
} | [] | closed | false | null | [] | null | 5 | "2020-07-11T22:35:20Z" | "2020-07-16T16:19:46Z" | "2020-07-16T16:19:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/383",
"merged_at": "2020-07-16T16:19:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/383"
} | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).
>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.
The data comes from social media and here's the summary table of tasks per language pair:
| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |
The tasks are as follows:
* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis
With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.
## Usage
For Spanish-English LID, we can load the data as follows:
```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
    print(data[split])
```
Here's the output:
```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```
Here's the list of shortcut names for every dataset available in LinCE:
* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`
All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features
Here is how the features look in the case of language identification (LID) tasks:
| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
For part-of-speech (POS) tagging:
| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |
For named entity recognition (NER):
| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |
**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.
For sentiment analysis (SA):
| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/383/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/382/comments | https://api.github.com/repos/huggingface/datasets/issues/382/events | https://github.com/huggingface/datasets/issues/382 | 655,290,482 | MDU6SXNzdWU2NTUyOTA0ODI= | 382 | 1080 | {
"avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4",
"events_url": "https://api.github.com/users/saq194/events{/privacy}",
"followers_url": "https://api.github.com/users/saq194/followers",
"following_url": "https://api.github.com/users/saq194/following{/other_user}",
"gists_url": "https://api.github.com/users/saq194/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saq194",
"id": 60942503,
"login": "saq194",
"node_id": "MDQ6VXNlcjYwOTQyNTAz",
"organizations_url": "https://api.github.com/users/saq194/orgs",
"received_events_url": "https://api.github.com/users/saq194/received_events",
"repos_url": "https://api.github.com/users/saq194/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saq194/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saq194/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saq194"
} | [] | closed | false | null | [] | null | 0 | "2020-07-11T22:29:07Z" | "2020-07-11T22:49:38Z" | "2020-07-11T22:49:38Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/382/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/381/comments | https://api.github.com/repos/huggingface/datasets/issues/381/events | https://github.com/huggingface/datasets/issues/381 | 655,277,119 | MDU6SXNzdWU2NTUyNzcxMTk= | 381 | NLp | {
"avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4",
"events_url": "https://api.github.com/users/Spartanthor/events{/privacy}",
"followers_url": "https://api.github.com/users/Spartanthor/followers",
"following_url": "https://api.github.com/users/Spartanthor/following{/other_user}",
"gists_url": "https://api.github.com/users/Spartanthor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Spartanthor",
"id": 68147610,
"login": "Spartanthor",
"node_id": "MDQ6VXNlcjY4MTQ3NjEw",
"organizations_url": "https://api.github.com/users/Spartanthor/orgs",
"received_events_url": "https://api.github.com/users/Spartanthor/received_events",
"repos_url": "https://api.github.com/users/Spartanthor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Spartanthor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spartanthor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Spartanthor"
} | [] | closed | false | null | [] | null | 0 | "2020-07-11T20:50:14Z" | "2020-07-11T20:50:39Z" | "2020-07-11T20:50:39Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/381/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/381/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/378/comments | https://api.github.com/repos/huggingface/datasets/issues/378/events | https://github.com/huggingface/datasets/issues/378 | 655,226,316 | MDU6SXNzdWU2NTUyMjYzMTY= | 378 | [dataset] Structure of MLQA seems unecessary nested | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 2 | "2020-07-11T15:16:08Z" | "2020-07-15T16:17:20Z" | "2020-07-15T16:17:20Z" | MEMBER | null | null | null | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
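For comparison, a flatter alternative (purely a hypothetical sketch, not what the script currently does) could drop the single-key inner dicts; the current definition is reproduced below:
```python
import nlp

# Hypothetical flattened schema (sketch only): plain Sequence(Value) instead of single-key dicts.
flat_features = nlp.Features(
    {
        "context": nlp.Value("string"),
        "questions": nlp.features.Sequence(nlp.Value("string")),
        "answers": nlp.features.Sequence(
            {"text": nlp.Value("string"), "answer_start": nlp.Value("int32")}
        ),
        "ids": nlp.features.Sequence(nlp.Value("string")),
    }
)
```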
```python
features=nlp.Features(
{
"context": nlp.Value("string"),
"questions": nlp.features.Sequence({"question": nlp.Value("string")}),
"answers": nlp.features.Sequence(
{"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),}
),
"ids": nlp.features.Sequence({"idx": nlp.Value("string")})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/378/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/378/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/377/comments | https://api.github.com/repos/huggingface/datasets/issues/377/events | https://github.com/huggingface/datasets/issues/377 | 655,215,790 | MDU6SXNzdWU2NTUyMTU3OTA= | 377 | Iyy!!! | {
"avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4",
"events_url": "https://api.github.com/users/ajinomoh/events{/privacy}",
"followers_url": "https://api.github.com/users/ajinomoh/followers",
"following_url": "https://api.github.com/users/ajinomoh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajinomoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ajinomoh",
"id": 68154535,
"login": "ajinomoh",
"node_id": "MDQ6VXNlcjY4MTU0NTM1",
"organizations_url": "https://api.github.com/users/ajinomoh/orgs",
"received_events_url": "https://api.github.com/users/ajinomoh/received_events",
"repos_url": "https://api.github.com/users/ajinomoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ajinomoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajinomoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ajinomoh"
} | [] | closed | false | null | [] | null | 0 | "2020-07-11T14:11:07Z" | "2020-07-11T14:30:51Z" | "2020-07-11T14:30:51Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/377/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/376/comments | https://api.github.com/repos/huggingface/datasets/issues/376/events | https://github.com/huggingface/datasets/issues/376 | 655,047,826 | MDU6SXNzdWU2NTUwNDc4MjY= | 376 | to_pandas conversion doesn't always work | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 2 | "2020-07-10T21:33:31Z" | "2022-10-04T18:05:39Z" | "2022-10-04T18:05:39Z" | MEMBER | null | null | null | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data')
>>> squad['train']
Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)
>>> squad['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__
format_kwargs=self._format_kwargs,
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list"))
File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager
blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks
list(extension_columns.keys()))
File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks
File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
```
cc @lhoestq would we have a way to detect this from the schema maybe?
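One possible way to detect it up front (a hypothetical helper, not part of nlp, that just inspects the pyarrow schema for `list<struct<...>>` columns):
```python
import pyarrow as pa

def columns_pandas_may_reject(schema: pa.Schema):
    """Return names of columns typed list<struct<...>>, which this to_pandas path can't handle."""
    flagged = []
    for field in schema:
        t = field.type
        if pa.types.is_list(t) and pa.types.is_struct(t.value_type):
            flagged.append(field.name)
    return flagged

# e.g. columns_pandas_may_reject(squad['train'].schema) should flag 'paragraphs' here
```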
Here is the schema for this pretty complex JSON:
```python
>>> squad['train'].schema
title: string
paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>
child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>
child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>
child 0, question: string
child 1, id: string
child 2, answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 3, is_impossible: bool
child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 1, context: string
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/376/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/376/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/375/comments | https://api.github.com/repos/huggingface/datasets/issues/375/events | https://github.com/huggingface/datasets/issues/375 | 655,023,307 | MDU6SXNzdWU2NTUwMjMzMDc= | 375 | TypeError when computing bertscore | {
"avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4",
"events_url": "https://api.github.com/users/willywsm1013/events{/privacy}",
"followers_url": "https://api.github.com/users/willywsm1013/followers",
"following_url": "https://api.github.com/users/willywsm1013/following{/other_user}",
"gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/willywsm1013",
"id": 13269577,
"login": "willywsm1013",
"node_id": "MDQ6VXNlcjEzMjY5NTc3",
"organizations_url": "https://api.github.com/users/willywsm1013/orgs",
"received_events_url": "https://api.github.com/users/willywsm1013/received_events",
"repos_url": "https://api.github.com/users/willywsm1013/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions",
"type": "User",
"url": "https://api.github.com/users/willywsm1013"
} | [] | closed | false | null | [] | null | 2 | "2020-07-10T20:37:44Z" | "2022-06-01T15:15:59Z" | "2022-06-01T15:15:59Z" | NONE | null | null | null | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most recent call last):
File "bert_score_evaluate.py", line 16, in <module>
print (bertscore.compute(hyps, refs, lang='en'))
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```
It seems like there is something wrong with the get_hash() function? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/375/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/374/comments | https://api.github.com/repos/huggingface/datasets/issues/374/events | https://github.com/huggingface/datasets/pull/374 | 654,895,066 | MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy | 374 | Add dataset post processing for faiss indexes | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-07-10T16:25:59Z" | "2020-07-13T13:44:03Z" | "2020-07-13T13:44:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/374.diff",
"html_url": "https://github.com/huggingface/datasets/pull/374",
"merged_at": "2020-07-13T13:44:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/374.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/374"
} | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbor queries.
## Implementation proposal
- Faiss indexes have to be added to the `nlp.Dataset` object, and therefore this lives in a different scope from what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` do. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change)
- The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (which is focused on arrow file creation), so the post processing is run inside the `as_dataset` method.
- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) whose names are defined by `_post_processing_resources`
- As we know what the post processing resources are, we can download them automatically from google storage instead of computing them when they're available (as we do for arrow files)
I'd be happy to discuss these choices!
## The `wiki_dpr` index
It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory.
This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.
I couldn't directly use the Faiss `index_factory` as I needed to set the metric to inner product.
## Example of usage
```python
import nlp
dset = nlp.load_dataset(
"wiki_dpr",
"psgs_w100_with_nq_embeddings",
split="train",
with_index=True
)
print(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])
```
(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)
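And a rough sketch of querying the index afterwards (hypothetical: it assumes the `get_nearest_examples` API of the indexable dataset and uses a random vector in place of a real DPR question embedding):
```python
import numpy as np

# Placeholder query: a real setup would embed a question with DPRQuestionEncoder (dim 768).
question_embedding = np.random.rand(768).astype("float32")
scores, examples = dset.get_nearest_examples("embeddings", question_embedding, k=5)
print(scores)
print(list(examples.keys()))
```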
## Demo
You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers:
https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/374/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/374/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/373/comments | https://api.github.com/repos/huggingface/datasets/issues/373/events | https://github.com/huggingface/datasets/issues/373 | 654,845,133 | MDU6SXNzdWU2NTQ4NDUxMzM= | 373 | Segmentation fault when loading local JSON dataset as of #372 | {
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab"
} | [] | closed | false | null | [] | null | 11 | "2020-07-10T15:04:25Z" | "2022-10-04T18:05:47Z" | "2022-10-04T18:05:47Z" | CONTRIBUTOR | null | null | null | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
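(For reference, the `FileExistsError` on the retry seems to come from the `.incomplete` directory left behind by the crashed run; a possible, untested cleanup before retrying:)
```python
# Hypothetical cleanup of the leftover temp dir from the crashed run (path taken from the traceback).
import shutil

shutil.rmtree(
    "/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete",
    ignore_errors=True,
)
```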
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/373/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/372/comments | https://api.github.com/repos/huggingface/datasets/issues/372/events | https://github.com/huggingface/datasets/pull/372 | 654,774,420 | MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4 | 372 | Make the json script more flexible | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 0 | "2020-07-10T13:15:15Z" | "2020-07-10T14:52:07Z" | "2020-07-10T14:52:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"merged_at": "2020-07-10T14:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372"
} | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file).
In this case, you should indicate, using `field=XXX`, the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts.
E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:
```python
from nlp import load_dataset
dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data')
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/372/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/371/comments | https://api.github.com/repos/huggingface/datasets/issues/371/events | https://github.com/huggingface/datasets/pull/371 | 654,668,242 | MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw | 371 | Fix cached file path for metrics with different config names | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-07-10T10:02:24Z" | "2020-07-10T13:45:22Z" | "2020-07-10T13:45:20Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/371.diff",
"html_url": "https://github.com/huggingface/datasets/pull/371",
"merged_at": "2020-07-10T13:45:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/371.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/371"
} | The config name was not taken into account when building the cached file path.
It should fix #368 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/371/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/371/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/370/comments | https://api.github.com/repos/huggingface/datasets/issues/370/events | https://github.com/huggingface/datasets/pull/370 | 654,304,193 | MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw | 370 | Allow indexing Dataset via np.ndarray | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 1 | "2020-07-09T19:43:15Z" | "2020-07-10T14:05:44Z" | "2020-07-10T14:05:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/370",
"merged_at": "2020-07-10T14:05:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/370"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/370/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/369/comments | https://api.github.com/repos/huggingface/datasets/issues/369/events | https://github.com/huggingface/datasets/issues/369 | 654,186,890 | MDU6SXNzdWU2NTQxODY4OTA= | 369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | {
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 2 | "2020-07-09T16:16:53Z" | "2020-12-15T23:07:22Z" | "2020-07-10T14:52:06Z" | CONTRIBUTOR | null | null | null | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False):
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables
file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
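As a data point, the error message suggests increasing the block size; a direct pyarrow experiment along those lines (hypothetical, untested here) would be:
```python
# Read the same file directly with pyarrow's JSON reader and a larger block_size (here 16MB).
from pyarrow import json as paj

table = paj.read_json(
    "./path/to/file.json",
    read_options=paj.ReadOptions(block_size=16 * 1024 * 1024),
)
print(table.num_rows)
```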
I haven't been able to find any reports of this specific pyarrow error here or elsewhere. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/369/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/368/comments | https://api.github.com/repos/huggingface/datasets/issues/368/events | https://github.com/huggingface/datasets/issues/368 | 654,087,251 | MDU6SXNzdWU2NTQwODcyNTE= | 368 | load_metric can't acquire lock anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ydshieh",
"id": 2521628,
"login": "ydshieh",
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ydshieh"
} | [] | closed | false | null | [] | null | 1 | "2020-07-09T14:04:09Z" | "2020-07-10T13:45:20Z" | "2020-07-10T13:45:20Z" | NONE | null | null | null | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
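A possible workaround (hypothetical, untested) is to delete the stale lock file named in the error before calling `load_metric` again, or to pass a different `experiment_id`:
```python
import os

# Remove the stale lock left by the crashed run (path taken from the error message above).
lock_path = "/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock"
if os.path.exists(lock_path):
    os.remove(lock_path)
```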
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/368/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/367/comments | https://api.github.com/repos/huggingface/datasets/issues/367/events | https://github.com/huggingface/datasets/pull/367 | 654,012,984 | MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz | 367 | Update Xtreme to add PAWS-X es | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-09T12:14:37Z" | "2020-07-09T12:37:11Z" | "2020-07-09T12:37:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"merged_at": "2020-07-09T12:37:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367"
} | This PR adds `PAWS-X.es` to the Xtreme dataset #362 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/367/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/366/comments | https://api.github.com/repos/huggingface/datasets/issues/366/events | https://github.com/huggingface/datasets/pull/366 | 653,954,896 | MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2 | 366 | Add quora dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 2 | "2020-07-09T10:34:22Z" | "2020-07-13T17:35:21Z" | "2020-07-13T17:35:21Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/366",
"merged_at": "2020-07-13T17:35:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/366"
} | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?"
"qid0": 0
"qid1": 1
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/366/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/365/comments | https://api.github.com/repos/huggingface/datasets/issues/365/events | https://github.com/huggingface/datasets/issues/365 | 653,845,964 | MDU6SXNzdWU2NTM4NDU5NjQ= | 365 | How to augment data ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | 6 | "2020-07-09T07:52:37Z" | "2020-07-10T09:12:07Z" | "2020-07-10T08:22:15Z" | NONE | null | null | null | Is there any clean way to augment data ?
For now my work-around is to use a batched map, like this:
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=True)
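# A hypothetical variant (sketch only, assuming a "text" column) that perturbs the copies
# instead of duplicating them verbatim:
#
# def aug(samples):
#     perturbed = [t.lower() for t in samples["text"]]
#     for k, v in list(samples.items()):
#         samples[k] = v + (perturbed if k == "text" else v)
#     return samples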
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/365/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/364/comments | https://api.github.com/repos/huggingface/datasets/issues/364/events | https://github.com/huggingface/datasets/pull/364 | 653,821,597 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5 | 364 | add MS MARCO dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 7 | "2020-07-09T07:11:19Z" | "2020-08-06T06:15:49Z" | "2020-08-06T06:15:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/364",
"merged_at": "2020-08-06T06:15:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/364"
} | This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here: https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/364/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/363/comments | https://api.github.com/repos/huggingface/datasets/issues/363/events | https://github.com/huggingface/datasets/pull/363 | 653,821,172 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy | 363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4",
"events_url": "https://api.github.com/users/eltoto1219/events{/privacy}",
"followers_url": "https://api.github.com/users/eltoto1219/followers",
"following_url": "https://api.github.com/users/eltoto1219/following{/other_user}",
"gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eltoto1219",
"id": 14030663,
"login": "eltoto1219",
"node_id": "MDQ6VXNlcjE0MDMwNjYz",
"organizations_url": "https://api.github.com/users/eltoto1219/orgs",
"received_events_url": "https://api.github.com/users/eltoto1219/received_events",
"repos_url": "https://api.github.com/users/eltoto1219/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eltoto1219"
} | [] | closed | false | null | [] | null | 23 | "2020-07-09T07:10:30Z" | "2020-08-24T09:59:35Z" | "2020-08-24T09:59:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/363",
"merged_at": "2020-08-24T09:59:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/363"
} | nlp/features.py:
The main factory class is MultiArray: every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
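For readers unfamiliar with pyarrow extension types, here is a minimal toy sketch of the general idea (hypothetical and much simpler than the actual MultiArray code in this PR):
```python
import pyarrow as pa

class FixedShape2DType(pa.PyExtensionType):
    """Toy extension type storing a (rows, cols) float32 matrix as a fixed-size list."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        pa.PyExtensionType.__init__(self, pa.list_(pa.float32(), rows * cols))

    def __reduce__(self):  # lets pyarrow pickle/reconstruct the type
        return FixedShape2DType, (self.rows, self.cols)
```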
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema only refers to an ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/363/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/362/comments | https://api.github.com/repos/huggingface/datasets/issues/362/events | https://github.com/huggingface/datasets/issues/362 | 653,766,245 | MDU6SXNzdWU2NTM3NjYyNDU= | 362 | [dateset subset missing] xtreme paws-x | {
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jerryIsHere",
"id": 50871412,
"login": "jerryIsHere",
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jerryIsHere"
} | [] | closed | false | null | [] | null | 1 | "2020-07-09T05:04:54Z" | "2020-07-09T12:38:42Z" | "2020-07-09T12:38:42Z" | CONTRIBUTOR | null | null | null | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError
It turns out that the subset for Spanish is missing
https://github.com/google-research-datasets/paws/tree/master/pawsx | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/362/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/361/comments | https://api.github.com/repos/huggingface/datasets/issues/361/events | https://github.com/huggingface/datasets/issues/361 | 653,757,376 | MDU6SXNzdWU2NTM3NTczNzY= | 361 | 🐛 [Metrics] ROUGE is non-deterministic | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | 8 | "2020-07-09T04:39:37Z" | "2022-09-09T15:20:55Z" | "2020-07-20T23:48:37Z" | NONE | null | null | null | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
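A minimal sketch of the two-run comparison (the predictions/references are made up, and the positional `compute()` call is an assumption about the metric API at the time):
```python
import nlp

rouge = nlp.load_metric("rouge")
predictions = ["the cat sat on the mat"]
references = ["the cat lay on the mat"]

run_1 = rouge.compute(predictions, references)
run_2 = rouge.compute(predictions, references)
print(run_1)
print(run_2)  # the aggregated scores can differ slightly between the two calls
```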
Example of F-scores for ROUGE-1, ROUGE-2 and ROUGE-L in two different runs:
> ['0.3350', '0.1470', '0.2329']
['0.3358', '0.1451', '0.2332']
---
Why is ROUGE not deterministic? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/361/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/361/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/360/comments | https://api.github.com/repos/huggingface/datasets/issues/360/events | https://github.com/huggingface/datasets/issues/360 | 653,687,176 | MDU6SXNzdWU2NTM2ODcxNzY= | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 2 | "2020-07-09T01:04:43Z" | "2020-07-09T19:31:51Z" | "2020-07-09T19:31:51Z" | CONTRIBUTOR | null | null | null | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset.
However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]`
I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of examples of length `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.
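A rough sketch of the proposed semantics, written as a plain Python function over a batch dict (column name -> list of values); the `text` column is just illustrative:
```python
from itertools import permutations

def ragged_map_fn(batch):
    """Hypothetical many-to-many transform: N input sentences -> M paired examples."""
    sentences = batch["text"]
    pairs = ["{}[SEP]{}".format(a, b) for a, b in permutations(sentences, 2)]
    return {"text": pairs}  # output length M is unrelated to input length N
```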
My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/360/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/360/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/359/comments | https://api.github.com/repos/huggingface/datasets/issues/359/events | https://github.com/huggingface/datasets/issues/359 | 653,656,279 | MDU6SXNzdWU2NTM2NTYyNzk= | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [] | closed | false | null | [] | null | 4 | "2020-07-08T23:24:05Z" | "2020-07-10T14:52:06Z" | "2020-07-10T14:52:06Z" | NONE | null | null | null | I tried using the Json dataloader to load some JSON Lines files, but got an exception in the parse_schema function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <module>
55 from nlp import load_dataset
56
---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles)
58
59
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
736 schema_dict[field.name] = Value(str(field.type))
737
--> 738 parse_schema(writer.schema, features)
739 self.info.features = Features(features)
740
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)
734 parse_schema(field.type.value_type, schema_dict[field.name])
735 else:
--> 736 schema_dict[field.name] = Value(str(field.type))
737
738 parse_schema(writer.schema, features)
<string> in __init__(self, dtype, id, _type)
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)
55
56 def __post_init__(self):
---> 57 self.pa_type = string_to_arrow(self.dtype)
58
59 def __call__(self):
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)
32 if str(type_str + "_") not in pa.__dict__:
33 raise ValueError(
---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. "
35 f"Please make sure to use a correct data type, see: "
36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
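For reference, the imperative construction mentioned in the next sentence can be sketched roughly like this (passing the arrow table straight to `Dataset` is an assumption about the constructor, and the `tokens` column is made up):
```python
import pyarrow as pa
import nlp

# Build the nested column with pyarrow directly, so no "list<item: string>"
# schema string ever needs to be parsed back into a feature type.
table = pa.Table.from_pydict({"tokens": [["a", "b"], ["c"]]})
dset = nlp.Dataset(table)  # assuming the constructor accepts an arrow table
```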
If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to skip the schema validation, the dataset can load as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/359/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/358/comments | https://api.github.com/repos/huggingface/datasets/issues/358/events | https://github.com/huggingface/datasets/pull/358 | 653,645,121 | MDExOlB1bGxSZXF1ZXN0NDQ2NTI0NjQ5 | 358 | Starting to add some real doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 1 | "2020-07-08T22:53:03Z" | "2020-07-14T09:58:17Z" | "2020-07-14T09:58:15Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/358.diff",
"html_url": "https://github.com/huggingface/datasets/pull/358",
"merged_at": "2020-07-14T09:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/358.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/358"
} | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
Also:
- fix a bug in `train_test_split`
- update the `csv` script
- add a verbose argument to the dataset processing methods
Still missing:
- doc for the metrics
- how to directly upload a community provided dataset with the CLI
- clean up more docstrings
- add the `features` argument to `load_dataset` (should be another PR) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/358/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/357/comments | https://api.github.com/repos/huggingface/datasets/issues/357/events | https://github.com/huggingface/datasets/pull/357 | 653,642,292 | MDExOlB1bGxSZXF1ZXN0NDQ2NTIyMzU2 | 357 | Add hashes to cnn_dailymail | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg"
} | [] | closed | false | null | [] | null | 2 | "2020-07-08T22:45:21Z" | "2020-07-13T14:16:38Z" | "2020-07-13T14:16:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/357",
"merged_at": "2020-07-13T14:16:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/357"
} | The URL hashes are helpful for comparing results from other sources. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/357/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/356/comments | https://api.github.com/repos/huggingface/datasets/issues/356/events | https://github.com/huggingface/datasets/pull/356 | 653,537,388 | MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5 | 356 | Add text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 0 | "2020-07-08T19:21:53Z" | "2020-07-10T14:19:03Z" | "2020-07-10T14:19:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/356",
"merged_at": "2020-07-10T14:19:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/356"
} | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text
```
but I would like a second set of eyes to ensure I did it right.
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 3,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/356/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/356/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/355/comments | https://api.github.com/repos/huggingface/datasets/issues/355/events | https://github.com/huggingface/datasets/issues/355 | 653,451,013 | MDU6SXNzdWU2NTM0NTEwMTM= | 355 | can't load SNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | 3 | "2020-07-08T16:54:14Z" | "2020-07-18T05:15:57Z" | "2020-07-15T07:59:01Z" | CONTRIBUTOR | null | null | null | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
Btw, here's the stack trace:
```
File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested
return function(data_struct)
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda>
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/355/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/355/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/354/comments | https://api.github.com/repos/huggingface/datasets/issues/354/events | https://github.com/huggingface/datasets/pull/354 | 653,357,617 | MDExOlB1bGxSZXF1ZXN0NDQ2MjkyMTc4 | 354 | More faiss control | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-07-08T14:45:20Z" | "2020-07-09T09:54:54Z" | "2020-07-09T09:54:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/354.diff",
"html_url": "https://github.com/huggingface/datasets/pull/354",
"merged_at": "2020-07-09T09:54:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/354.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/354"
} | Allow users to specify a faiss index they created themselves, since indexes can sometimes be composite, for example. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/354/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/354/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/353/comments | https://api.github.com/repos/huggingface/datasets/issues/353/events | https://github.com/huggingface/datasets/issues/353 | 653,250,611 | MDU6SXNzdWU2NTMyNTA2MTE= | 353 | [Dataset requests] New datasets for Text Classification | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | 9 | "2020-07-08T12:17:58Z" | "2024-02-07T20:07:15Z" | null | MEMBER | null | null | null | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- #386
- [x] Yelp-5
- #1315
- [x] Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**
- [x] SST (Stanford Sentiment Treebank) **[include in glue]**
- #1934
- [ ] Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]**
- [x] Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification
- #791
- #1389
- [x] 20 Newsgroups. The 20 Newsgroups dataset **[done]**
- #410
- [x] Sogou News dataset **[done]**
- #450
- [x] Reuters news. The Reuters-21578 dataset [165] **[done]**
- #471
- [x] DBpedia. The DBpedia dataset [170]
- #1116
- [ ] Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database
- [ ] EUR-Lex. The EUR-Lex dataset
- [x] WOS. The Web Of Science (WOS) dataset **[done]**
- #424
- [ ] PubMed. PubMed [173]
- [x] TREC-QA: TREC-6 + TREC-50
- See above: TREC-6 dataset
- [x] Quora. The Quora dataset [180]
- #366
All these datasets are cited in https://arxiv.org/abs/2004.03705 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/353/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/352/comments | https://api.github.com/repos/huggingface/datasets/issues/352/events | https://github.com/huggingface/datasets/pull/352 | 653,128,883 | MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky | 352 | 🐛[BugFix]fix seqeval | {
"avatar_url": "https://avatars.githubusercontent.com/u/20281571?v=4",
"events_url": "https://api.github.com/users/AlongWY/events{/privacy}",
"followers_url": "https://api.github.com/users/AlongWY/followers",
"following_url": "https://api.github.com/users/AlongWY/following{/other_user}",
"gists_url": "https://api.github.com/users/AlongWY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AlongWY",
"id": 20281571,
"login": "AlongWY",
"node_id": "MDQ6VXNlcjIwMjgxNTcx",
"organizations_url": "https://api.github.com/users/AlongWY/orgs",
"received_events_url": "https://api.github.com/users/AlongWY/received_events",
"repos_url": "https://api.github.com/users/AlongWY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AlongWY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlongWY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AlongWY"
} | [] | closed | false | null | [] | null | 7 | "2020-07-08T09:12:12Z" | "2020-07-16T08:26:46Z" | "2020-07-16T08:26:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"merged_at": "2020-07-16T08:26:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352"
} | Fix seqeval so it correctly processes labels such as 'B' and 'B-ARGM-LOC' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/352/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/351/comments | https://api.github.com/repos/huggingface/datasets/issues/351/events | https://github.com/huggingface/datasets/pull/351 | 652,424,048 | MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NTE4 | 351 | add pandas dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-07T15:38:07Z" | "2020-07-08T14:15:16Z" | "2020-07-08T14:15:15Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/351.diff",
"html_url": "https://github.com/huggingface/datasets/pull/351",
"merged_at": "2020-07-08T14:15:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/351.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/351"
} | Create a dataset from serialized pandas dataframes.
Usage:
```python
from nlp import load_dataset
dset = load_dataset("pandas", data_files="df.pkl")["train"]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/351/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/351/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/350/comments | https://api.github.com/repos/huggingface/datasets/issues/350/events | https://github.com/huggingface/datasets/pull/350 | 652,398,691 | MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz | 350 | add from_pandas and from_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-07T15:03:53Z" | "2020-07-08T14:14:33Z" | "2020-07-08T14:14:32Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/350.diff",
"html_url": "https://github.com/huggingface/datasets/pull/350",
"merged_at": "2020-07-08T14:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/350.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/350"
} | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values); otherwise the arrow schema is inferred from the data automatically by pyarrow.
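A quick usage sketch of the two new methods (the column names are illustrative):
```python
import pandas as pd
from nlp import Dataset

df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
dset_from_df = Dataset.from_pandas(df)
dset_from_dict = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
```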
One question that I have right now:
+ Should we also add a `save()` method that would write the dataset to disk? Right now, if we create a `Dataset` using those two new methods, the data is kept in RAM. To reload it later, we can call the `from_file()` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/350/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/349/comments | https://api.github.com/repos/huggingface/datasets/issues/349/events | https://github.com/huggingface/datasets/pull/349 | 652,231,571 | MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1 | 349 | Hyperpartisan news detection | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 2 | "2020-07-07T11:06:37Z" | "2020-07-07T20:47:27Z" | "2020-07-07T14:57:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/349",
"merged_at": "2020-07-07T14:57:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/349"
} | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`? (a minimal sketch of that pattern is shown below)
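A minimal sketch of subclassing `nlp.BuilderConfig`, following the pattern used by other dataset scripts; the class name and extra field are illustrative:
```python
import nlp

class HyperpartisanConfig(nlp.BuilderConfig):
    """Hypothetical config carrying one extra, dataset-specific field."""

    def __init__(self, validated_by=None, **kwargs):
        super().__init__(**kwargs)
        self.validated_by = validated_by
```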
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/349/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/348/comments | https://api.github.com/repos/huggingface/datasets/issues/348/events | https://github.com/huggingface/datasets/pull/348 | 652,158,308 | MDExOlB1bGxSZXF1ZXN0NDQ1MjgwNjk3 | 348 | Add OSCAR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.github.com/users/pjox/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pjox",
"id": 635220,
"login": "pjox",
"node_id": "MDQ6VXNlcjYzNTIyMA==",
"organizations_url": "https://api.github.com/users/pjox/orgs",
"received_events_url": "https://api.github.com/users/pjox/received_events",
"repos_url": "https://api.github.com/users/pjox/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjox/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pjox"
} | [] | closed | false | null | [] | null | 20 | "2020-07-07T09:22:07Z" | "2021-05-03T22:07:08Z" | "2021-02-09T10:19:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/348.diff",
"html_url": "https://github.com/huggingface/datasets/pull/348",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/348.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/348"
} | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/348/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/348/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/347/comments | https://api.github.com/repos/huggingface/datasets/issues/347/events | https://github.com/huggingface/datasets/issues/347 | 652,106,567 | MDU6SXNzdWU2NTIxMDY1Njc= | 347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | {
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jerryIsHere",
"id": 50871412,
"login": "jerryIsHere",
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jerryIsHere"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 10 | "2020-07-07T08:14:23Z" | "2020-09-07T14:51:45Z" | "2020-09-07T14:51:45Z" | CONTRIBUTOR | null | null | null | ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png)
I guess the error is related to a Python source-encoding issue: my PC is trying to decode the source code with the wrong encoding/decoding tools. Perhaps see:
https://www.python.org/dev/peps/pep-0263/
I guess the error was triggered by the code " module = importlib.import_module(module_path)" at line 57 in the source code: nlp/src/nlp/load.py / (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51)
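If it really is a default-codec problem (cp950 is the Traditional Chinese Windows codec), a hedged guess at the kind of fix needed is opening the script/data files with an explicit encoding instead of the platform default; `path_to_file` below is hypothetical:
```python
# Illustrative only: force UTF-8 instead of the platform default codec (cp950 here).
with open(path_to_file, encoding="utf-8") as f:
    content = f.read()
```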
Any ideas?
p.s. tried the same code on colab, that runs perfectly
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/347/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/346/comments | https://api.github.com/repos/huggingface/datasets/issues/346/events | https://github.com/huggingface/datasets/pull/346 | 652,044,151 | MDExOlB1bGxSZXF1ZXN0NDQ1MTg4MTUz | 346 | Add emotion dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | 9 | "2020-07-07T06:35:41Z" | "2022-05-30T15:16:44Z" | "2020-07-13T14:39:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/346.diff",
"html_url": "https://github.com/huggingface/datasets/pull/346",
"merged_at": "2020-07-13T14:39:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/346.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/346"
} | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
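One hedged observation: `invalid load key, '<'` usually means the file being unpickled starts with `<`, i.e. an HTML page was saved instead of the pickle. For Dropbox-hosted files, requesting the direct-download form of the URL is a common workaround (illustrative only):
```python
# Illustrative check: if the cached file starts with b"<", the download manager
# fetched an HTML page rather than the pickle itself.
with open(downloaded_path, "rb") as f:  # downloaded_path is hypothetical
    print(f.read(16))

# For Dropbox links, ?dl=1 requests the raw file instead of the preview page.
_URL = "https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl?dl=1"
```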
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/346/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/346/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/345/comments | https://api.github.com/repos/huggingface/datasets/issues/345/events | https://github.com/huggingface/datasets/issues/345 | 651,761,201 | MDU6SXNzdWU2NTE3NjEyMDE= | 345 | Supporting documents in ELI5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29262273?v=4",
"events_url": "https://api.github.com/users/saverymax/events{/privacy}",
"followers_url": "https://api.github.com/users/saverymax/followers",
"following_url": "https://api.github.com/users/saverymax/following{/other_user}",
"gists_url": "https://api.github.com/users/saverymax/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saverymax",
"id": 29262273,
"login": "saverymax",
"node_id": "MDQ6VXNlcjI5MjYyMjcz",
"organizations_url": "https://api.github.com/users/saverymax/orgs",
"received_events_url": "https://api.github.com/users/saverymax/received_events",
"repos_url": "https://api.github.com/users/saverymax/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saverymax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saverymax/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saverymax"
} | [] | closed | false | null | [] | null | 2 | "2020-07-06T19:14:13Z" | "2020-10-27T15:38:45Z" | "2020-10-27T15:38:45Z" | NONE | null | null | null | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least.
If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/345/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/344/comments | https://api.github.com/repos/huggingface/datasets/issues/344/events | https://github.com/huggingface/datasets/pull/344 | 651,495,246 | MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw | 344 | Search qa | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 1 | "2020-07-06T12:23:16Z" | "2020-07-16T08:58:16Z" | "2020-07-16T08:58:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/344",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/344"
} | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: the split version
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/344/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/343/comments | https://api.github.com/repos/huggingface/datasets/issues/343/events | https://github.com/huggingface/datasets/pull/343 | 651,419,630 | MDExOlB1bGxSZXF1ZXN0NDQ0Njc4NDEw | 343 | Fix nested tensorflow format | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-07-06T10:13:45Z" | "2020-07-06T13:11:52Z" | "2020-07-06T13:11:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/343.diff",
"html_url": "https://github.com/huggingface/datasets/pull/343",
"merged_at": "2020-07-06T13:11:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/343.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/343"
} | In #339 and #337 we are thinking about adding a way to export datasets to tfrecords.
However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using a nested map operation to convert features to `tf.ragged.constant`.
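For context, a minimal illustration of why ragged tensors are needed here (a sketch with made-up values, not the library code):
```python
import tensorflow as tf

# Nested, variable-length features (e.g. the "answers" field of squad)
# can't be packed into a regular dense tensor, so they are converted
# to ragged tensors instead.
answer_texts = [["Saint Bernadette Soubirous"], ["a copper statue", "of Christ"]]
answer_starts = [[515], [188, 200]]

tf_texts = tf.ragged.constant(answer_texts)    # RaggedTensor of strings
tf_starts = tf.ragged.constant(answer_starts)  # RaggedTensor of ints
```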
I also added tests on the `set_format` function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/343/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/342/comments | https://api.github.com/repos/huggingface/datasets/issues/342/events | https://github.com/huggingface/datasets/issues/342 | 651,333,194 | MDU6SXNzdWU2NTEzMzMxOTQ= | 342 | Features should be updated when `map()` changes schema | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 1 | "2020-07-06T08:03:23Z" | "2020-07-23T10:15:16Z" | "2020-07-23T10:15:16Z" | MEMBER | null | null | null | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/342/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/341/comments | https://api.github.com/repos/huggingface/datasets/issues/341/events | https://github.com/huggingface/datasets/pull/341 | 650,611,969 | MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx | 341 | add fever dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-07-03T13:53:07Z" | "2020-07-06T13:03:48Z" | "2020-07-06T13:03:47Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/341.diff",
"html_url": "https://github.com/huggingface/datasets/pull/341",
"merged_at": "2020-07-06T13:03:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/341.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/341"
} | This PR adds the FEVER dataset https://fever.ai/ used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf).
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/341/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/340/comments | https://api.github.com/repos/huggingface/datasets/issues/340/events | https://github.com/huggingface/datasets/pull/340 | 650,533,920 | MDExOlB1bGxSZXF1ZXN0NDQ0MDA2Nzcy | 340 | Update cfq.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/4437290?v=4",
"events_url": "https://api.github.com/users/brainshawn/events{/privacy}",
"followers_url": "https://api.github.com/users/brainshawn/followers",
"following_url": "https://api.github.com/users/brainshawn/following{/other_user}",
"gists_url": "https://api.github.com/users/brainshawn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brainshawn",
"id": 4437290,
"login": "brainshawn",
"node_id": "MDQ6VXNlcjQ0MzcyOTA=",
"organizations_url": "https://api.github.com/users/brainshawn/orgs",
"received_events_url": "https://api.github.com/users/brainshawn/received_events",
"repos_url": "https://api.github.com/users/brainshawn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brainshawn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brainshawn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brainshawn"
} | [] | closed | false | null | [] | null | 1 | "2020-07-03T11:23:19Z" | "2020-07-03T12:33:50Z" | "2020-07-03T12:33:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/340",
"merged_at": "2020-07-03T12:33:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/340"
} | Make the dataset name consistent with the one in the paper: Compositional Freebase Question => Compositional Freebase Questions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/340/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/339/comments | https://api.github.com/repos/huggingface/datasets/issues/339/events | https://github.com/huggingface/datasets/pull/339 | 650,156,468 | MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw | 339 | Add dataset.export() to TFRecords | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 18 | "2020-07-02T19:26:27Z" | "2020-07-22T09:16:12Z" | "2020-07-22T09:16:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/339",
"merged_at": "2020-07-22T09:16:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/339"
} | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int (a sketch of this mapping is shown right after this list). If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
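A sketch of the dtype mapping mentioned in the third point above (illustration only, not the exact code in the PR):
```python
import tensorflow as tf

# Map python values coming out of the formatted dataset to
# tf.train.Feature types; only string, float and int are supported.
def _feature(value):
    if isinstance(value, str):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.encode("utf-8")]))
    if isinstance(value, float):
        return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
    if isinstance(value, int):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
    raise ValueError(f"Unsupported value type: {type(value)}")
```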
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/339/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/338/comments | https://api.github.com/repos/huggingface/datasets/issues/338/events | https://github.com/huggingface/datasets/pull/338 | 650,057,253 | MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx | 338 | Run `make style` | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 0 | "2020-07-02T16:19:47Z" | "2020-07-02T18:03:10Z" | "2020-07-02T18:03:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/338",
"merged_at": "2020-07-02T18:03:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/338"
} | These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/338/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/337/comments | https://api.github.com/repos/huggingface/datasets/issues/337/events | https://github.com/huggingface/datasets/issues/337 | 650,035,887 | MDU6SXNzdWU2NTAwMzU4ODc= | 337 | [Feature request] Export Arrow dataset to TFRecords | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 0 | "2020-07-02T15:47:12Z" | "2020-07-22T09:16:12Z" | "2020-07-22T09:16:12Z" | CONTRIBUTOR | null | null | null | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex))
ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"])
# then add this method
ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord")
```
which would create files like so:
```bash
/my/tfrecords/myrecord_1.tfrecord
/my/tfrecords/myrecord_2.tfrecord
...
```
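On the consumption side, reading those files back would presumably be the usual `tf.data` pattern; a sketch (the feature spec below is an assumption and would have to match whatever `export()` serializes):
```python
import tensorflow as tf

# Assumed feature spec matching the columns exported above.
feature_spec = {
    "input_ids": tf.io.VarLenFeature(tf.int64),
    "token_type_ids": tf.io.VarLenFeature(tf.int64),
    "attention_mask": tf.io.VarLenFeature(tf.int64),
}

files = tf.data.Dataset.list_files("/my/tfrecords/myrecord_*.tfrecord")
records = tf.data.TFRecordDataset(files)
parsed = records.map(lambda ex: tf.io.parse_single_example(ex, feature_spec))
```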
I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/337/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/336/comments | https://api.github.com/repos/huggingface/datasets/issues/336/events | https://github.com/huggingface/datasets/issues/336 | 649,914,203 | MDU6SXNzdWU2NDk5MTQyMDM= | 336 | [Dataset requests] New datasets for Open Question Answering | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
] | null | 0 | "2020-07-02T13:03:03Z" | "2020-07-16T09:04:22Z" | "2020-07-16T09:04:22Z" | MEMBER | null | null | null | We are still missing a few datasets for Open-Question Answering, which is currently a field in strong development.
Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al. 2015) [not open-source]
- MS-MARCO (Nguyen et al. 2016) [done]
- SearchQA (Dunn et al. 2017) [done]
- FEVER (Thorne et al. 2018) [done]
All these datasets are cited in http://arxiv.org/abs/2005.11401 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/336/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/336/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/335/comments | https://api.github.com/repos/huggingface/datasets/issues/335/events | https://github.com/huggingface/datasets/pull/335 | 649,765,179 | MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1 | 335 | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | {
"avatar_url": "https://avatars.githubusercontent.com/u/15162021?v=4",
"events_url": "https://api.github.com/users/PetrosStav/events{/privacy}",
"followers_url": "https://api.github.com/users/PetrosStav/followers",
"following_url": "https://api.github.com/users/PetrosStav/following{/other_user}",
"gists_url": "https://api.github.com/users/PetrosStav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PetrosStav",
"id": 15162021,
"login": "PetrosStav",
"node_id": "MDQ6VXNlcjE1MTYyMDIx",
"organizations_url": "https://api.github.com/users/PetrosStav/orgs",
"received_events_url": "https://api.github.com/users/PetrosStav/received_events",
"repos_url": "https://api.github.com/users/PetrosStav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PetrosStav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PetrosStav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PetrosStav"
} | [] | closed | false | null | [] | null | 2 | "2020-07-02T09:03:41Z" | "2020-07-15T08:02:07Z" | "2020-07-15T08:02:07Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/335",
"merged_at": "2020-07-15T08:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/335"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/335/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/334/comments | https://api.github.com/repos/huggingface/datasets/issues/334/events | https://github.com/huggingface/datasets/pull/334 | 649,661,791 | MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0 | 334 | Add dataset.shard() method | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 1 | "2020-07-02T06:05:19Z" | "2020-07-06T12:35:36Z" | "2020-07-06T12:35:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/334",
"merged_at": "2020-07-06T12:35:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/334"
} | Fixes https://github.com/huggingface/nlp/issues/312 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/334/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/333/comments | https://api.github.com/repos/huggingface/datasets/issues/333/events | https://github.com/huggingface/datasets/pull/333 | 649,236,516 | MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0 | 333 | fix variable name typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 2 | "2020-07-01T19:13:50Z" | "2020-07-24T15:43:31Z" | "2020-07-24T08:32:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/333",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/333"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/333/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/332/comments | https://api.github.com/repos/huggingface/datasets/issues/332/events | https://github.com/huggingface/datasets/pull/332 | 649,140,135 | MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz | 332 | Add wiki_dpr | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-07-01T17:12:00Z" | "2020-07-06T12:21:17Z" | "2020-07-06T12:21:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/332",
"merged_at": "2020-07-06T12:21:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/332"
} | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of urls as input of the download_manager | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/332/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/331/comments | https://api.github.com/repos/huggingface/datasets/issues/331/events | https://github.com/huggingface/datasets/issues/331 | 648,533,199 | MDU6SXNzdWU2NDg1MzMxOTk= | 331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 5 | "2020-06-30T22:21:33Z" | "2020-07-09T13:03:40Z" | "2020-07-09T13:03:40Z" | CONTRIBUTOR | null | null | null | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
builder_instance.download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/331/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/330/comments | https://api.github.com/repos/huggingface/datasets/issues/330/events | https://github.com/huggingface/datasets/pull/330 | 648,525,720 | MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw | 330 | Doc red | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 0 | "2020-06-30T22:05:31Z" | "2020-07-06T12:10:39Z" | "2020-07-05T12:27:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/330.diff",
"html_url": "https://github.com/huggingface/datasets/pull/330",
"merged_at": "2020-07-05T12:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/330.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/330"
} | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this.
- As well as the relation id, the full relation name is mapped from `rel_info.json`
- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.
- Used the fix from #319 to allow nested sequences of dicts. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/330/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/329/comments | https://api.github.com/repos/huggingface/datasets/issues/329/events | https://github.com/huggingface/datasets/issues/329 | 648,446,979 | MDU6SXNzdWU2NDg0NDY5Nzk= | 329 | [Bug] FileLock dependency incompatible with filesystem | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 10 | "2020-06-30T19:45:31Z" | "2023-10-07T17:07:53Z" | "2020-06-30T21:33:06Z" | CONTRIBUTOR | null | null | null | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like this:
```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```
It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:
```python
open("/fsx/hello.txt").write("hello") # succeeds
from filelock import FileLock
with FileLock("/fsx/hello.lock"):
open("/fsx/hello.txt").write("hello") # hangs indefinitely
```
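As a quick diagnostic, filelock's `timeout` argument can at least turn the indefinite hang into an error (sketch, same paths as above):
```python
from filelock import FileLock, Timeout

# Fail fast instead of blocking forever if the filesystem doesn't
# support the locking primitive filelock relies on.
try:
    with FileLock("/fsx/hello.lock", timeout=10):
        with open("/fsx/hello.txt", "w") as f:
            f.write("hello")
except Timeout:
    print("Could not acquire the lock within 10 seconds on this filesystem")
```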
Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/329/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/328/comments | https://api.github.com/repos/huggingface/datasets/issues/328/events | https://github.com/huggingface/datasets/issues/328 | 648,326,841 | MDU6SXNzdWU2NDgzMjY4NDE= | 328 | Fork dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [] | closed | false | null | [] | null | 5 | "2020-06-30T16:42:53Z" | "2020-07-06T21:43:59Z" | "2020-07-06T21:43:59Z" | NONE | null | null | null | We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training NER and Relations prediction heads.
Is there some good way to "fork" a dataset? E.g. (a rough sketch using `map` follows the outline below):
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 -> DatasetREL
or
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 + DatasetNER -> DatasetREL
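The rough sketch mentioned above, for options 2/3 (assuming `Dataset.from_dict` is available in this version; `build_ner_features` / `build_rel_features` are placeholders for our own preprocessing, not nlp APIs):
```python
import nlp

def build_ner_features(example):
    # placeholder for our own NER preprocessing
    return {"ner_tags": ["O"] * len(example["text"].split())}

def build_rel_features(example):
    # placeholder for our own relation preprocessing
    return {"relations": []}

dataset1 = nlp.Dataset.from_dict({"text": ["Alice works at Acme."]})
dataset_ner = dataset1.map(build_ner_features)     # Dataset1 -> DatasetNER
dataset_rel = dataset_ner.map(build_rel_features)  # DatasetNER -> DatasetREL
```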
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/328/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/327/comments | https://api.github.com/repos/huggingface/datasets/issues/327/events | https://github.com/huggingface/datasets/pull/327 | 648,312,858 | MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw | 327 | set seed for suffling tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-30T16:21:34Z" | "2020-07-02T08:34:05Z" | "2020-07-02T08:34:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/327",
"merged_at": "2020-07-02T08:34:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/327"
} | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/327/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/326/comments | https://api.github.com/repos/huggingface/datasets/issues/326/events | https://github.com/huggingface/datasets/issues/326 | 648,126,103 | MDU6SXNzdWU2NDgxMjYxMDM= | 326 | Large dataset in Squad2-format | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00"
} | [] | closed | false | null | [] | null | 8 | "2020-06-30T12:18:59Z" | "2020-07-09T09:01:50Z" | "2020-07-09T09:01:50Z" | CONTRIBUTOR | null | null | null | At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community.
Because of the computing power required, we split it into multiple tiles, but they are all in the same format.
Right now, the most important facts about it are these:
- Contexts: 1.047.671
- questions: 1.677.732
- Answers: 6.742.406
- unanswerable: 377.398
It is already cleaned
<pre><code>
train_data = [
{
'context': "this is the context",
'qas': [
{
'id': "00002",
'is_impossible': False,
'question': "whats is this",
'answers': [
{
'text': "answer",
'answer_start': 0
}
]
},
{
'id': "00003",
'is_impossible': False,
'question': "question2",
'answers': [
{
'text': "answer2",
'answer_start': 1
}
]
}
]
}
]
</code></pre>
Because it is growing every day, we are thinking about a structure like this:
We host a JSON file containing all the download links, and the script can load it dynamically.
At the moment it is around ~20GB
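Roughly what we picture for the loading script (only a sketch; the class name and manifest URL are placeholders, and `_info` / `_generate_examples` are omitted):
```python
import json
import nlp

_MANIFEST_URL = "https://example.com/our-qa-dataset/manifest.json"  # placeholder

class OurQADataset(nlp.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # The manifest lists the download links for all tiles and grows over time.
        manifest_path = dl_manager.download(_MANIFEST_URL)
        with open(manifest_path) as f:
            tile_urls = json.load(f)
        tile_paths = dl_manager.download(tile_urls)
        return [
            nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepaths": tile_paths})
        ]
```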
Any advice on how to handle this, or a ready-to-use template? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/326/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/325/comments | https://api.github.com/repos/huggingface/datasets/issues/325/events | https://github.com/huggingface/datasets/pull/325 | 647,601,592 | MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw | 325 | Add SQuADShifts dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4",
"events_url": "https://api.github.com/users/millerjohnp/events{/privacy}",
"followers_url": "https://api.github.com/users/millerjohnp/followers",
"following_url": "https://api.github.com/users/millerjohnp/following{/other_user}",
"gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/millerjohnp",
"id": 8953195,
"login": "millerjohnp",
"node_id": "MDQ6VXNlcjg5NTMxOTU=",
"organizations_url": "https://api.github.com/users/millerjohnp/orgs",
"received_events_url": "https://api.github.com/users/millerjohnp/received_events",
"repos_url": "https://api.github.com/users/millerjohnp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/millerjohnp"
} | [] | closed | false | null | [] | null | 1 | "2020-06-29T19:11:16Z" | "2020-06-30T17:07:31Z" | "2020-06-30T17:07:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/325",
"merged_at": "2020-06-30T17:07:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/325"
} | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/325/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/324/comments | https://api.github.com/repos/huggingface/datasets/issues/324/events | https://github.com/huggingface/datasets/issues/324 | 647,525,725 | MDU6SXNzdWU2NDc1MjU3MjU= | 324 | Error when calculating glue score | {
"avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4",
"events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}",
"followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers",
"following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}",
"gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/D-i-l-r-u-k-s-h-i",
"id": 47185867,
"login": "D-i-l-r-u-k-s-h-i",
"node_id": "MDQ6VXNlcjQ3MTg1ODY3",
"organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs",
"received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events",
"repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions",
"type": "User",
"url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i"
} | [] | closed | false | null | [] | null | 4 | "2020-06-29T16:53:48Z" | "2020-07-09T09:13:34Z" | "2020-07-09T09:13:34Z" | NONE | null | null | null | I was trying the glue score along with other metrics here, but glue gives me this error:
```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
191 """
192 if predictions is not None:
--> 193 self.add_batch(predictions=predictions, references=references)
194 self.finalize(timeout=timeout)
195
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
207 if self.writer is None:
208 self._init_writer()
--> 209 self.writer.write_batch(batch)
210
211 def add(self, prediction=None, reference=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
155 if self.pa_writer is None:
156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
158 if writer_batch_size is None:
159 writer_batch_size = self.writer_batch_size
/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
TypeError: an integer is required (got type str)
```
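I suspect the metric expects integer class ids rather than label strings (the error mentions an integer being required); something like this should at least be type-consistent, though I'm not sure it is the intended usage:
```python
import nlp

glue_metric = nlp.load_metric('glue', name="cola")
# cola appears to expect integer class labels, e.g. 0/1, not label strings
predictions = [0, 1, 1, 0]
references = [0, 1, 0, 0]
glue_score = glue_metric.compute(predictions, references)
```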
I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/324/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/323/comments | https://api.github.com/repos/huggingface/datasets/issues/323/events | https://github.com/huggingface/datasets/pull/323 | 647,521,308 | MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3 | 323 | Add package path to sys when downloading package as github archive | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 2 | "2020-06-29T16:46:01Z" | "2020-07-30T14:00:23Z" | "2020-07-30T14:00:23Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323"
} | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
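Concretely the trick boils down to something like this (sketch; the actual path is resolved from wherever the archive was extracted):
```python
import sys
from pathlib import Path

package_dir = Path("/path/to/extracted/github_archive")  # placeholder
if str(package_dir) not in sys.path:
    sys.path.append(str(package_dir))  # so the module's internal imports resolve
```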
This PR fixes https://github.com/huggingface/nlp/issues/305 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/323/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/322/comments | https://api.github.com/repos/huggingface/datasets/issues/322/events | https://github.com/huggingface/datasets/pull/322 | 647,483,850 | MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2 | 322 | output nested dict in get_nearest_examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-29T15:47:47Z" | "2020-07-02T08:33:33Z" | "2020-07-02T08:33:32Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/322",
"merged_at": "2020-07-02T08:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/322"
} | As we are using a columnar format like Arrow as the backend for datasets, we expect to get a dictionary of columns when we slice a dataset, as in this example:
```python
my_examples = dataset[0:10]
print(type(my_examples))
# >>> dict
print(my_examples["my_column"][0])
# >>> this is the first element of the column 'my_column'
```
Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples:
```python
dataset.add_faiss_index(column="embeddings")
scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding)
print(type(examples))
# >>> dict
```
Previously it was returning a list[dict]. It was the only place that was using this output format.
To make it work I had to implement `__getitem__(key)` where `key` is a list.
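For illustration, a hedged usage sketch of that list-based indexing (the indices are made up, and it reuses the `dataset` and `my_column` names from the examples above):
```python
# Hypothetical usage sketch: __getitem__ with a list of indices returns a dict of columns,
# consistent with the slicing behaviour shown above.
indices = [0, 3, 7]
examples = dataset[indices]
print(type(examples))
# >>> dict
print(len(examples["my_column"]))
# >>> 3  (one value per requested index)
```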
This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/322/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/321/comments | https://api.github.com/repos/huggingface/datasets/issues/321/events | https://github.com/huggingface/datasets/issues/321 | 647,271,526 | MDU6SXNzdWU2NDcyNzE1MjY= | 321 | ERROR:root:mwparserfromhell | {
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shiro-LK",
"id": 26505641,
"login": "Shiro-LK",
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shiro-LK"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 10 | "2020-06-29T11:10:43Z" | "2022-02-14T15:21:46Z" | "2022-02-14T15:21:46Z" | NONE | null | null | null | Hi,
I am trying to download some Wikipedia data, but I got this error for Spanish ("es"). Other languages may have the same error, but I haven't tried all of them.
`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.`
The code I used was:
`dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/321/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/320/comments | https://api.github.com/repos/huggingface/datasets/issues/320/events | https://github.com/huggingface/datasets/issues/320 | 647,188,167 | MDU6SXNzdWU2NDcxODgxNjc= | 320 | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | "2020-06-29T07:36:35Z" | "2020-06-29T14:44:42Z" | "2020-06-29T14:44:42Z" | CONTRIBUTOR | null | null | null | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
```
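As a possible local workaround (assuming `load_dataset` in this version of `nlp` still accepts the `ignore_verifications` flag — I haven't verified this against the viewer's setup, so treat it as a sketch, not a confirmed fix):
```python
# Hedged workaround sketch: skip split-size/checksum verification when loading locally.
# Assumes the `ignore_verifications` argument exists in the installed version of `nlp`.
import nlp

dataset = nlp.load_dataset("blog_authorship_corpus", ignore_verifications=True)
```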
@srush @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/320/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/319/comments | https://api.github.com/repos/huggingface/datasets/issues/319/events | https://github.com/huggingface/datasets/issues/319 | 646,792,487 | MDU6SXNzdWU2NDY3OTI0ODc= | 319 | Nested sequences with dicts | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 1 | "2020-06-27T23:45:17Z" | "2020-07-03T10:22:00Z" | "2020-07-03T10:22:00Z" | CONTRIBUTOR | null | null | null | I'm pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but I'm getting an error when trying to add a nested `nlp.features.Sequence(nlp.features.Sequence({key:value,...}))`.
The original data is in this format:
```python
{
'title': "Title of wiki page",
'vertexSet': [
[
{ 'name': "mention_name",
'sent_id': "mention in which sentence",
'pos': ["postion of mention in a sentence"],
'type': "NER_type"},
{another mention}
],
[another entity]
]
...
}
```
So to represent this I've attempted to write:
```python
...
features=nlp.Features({
"title": nlp.Value("string"),
"vertexSet": nlp.features.Sequence(nlp.features.Sequence({
"name": nlp.Value("string"),
"sent_id": nlp.Value("int32"),
"pos": nlp.features.Sequence(nlp.Value("int32")),
"type": nlp.Value("string"),
})),
...
}),
...
```
This is giving me the error:
```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force"], "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string")))` or `nlp.features.Sequence({key:value,...})`, just not nested sequences with a dict.
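One hedged fallback, in case deeper nesting really isn't supported, would be to flatten the mention level myself — a sketch (the `mentions`/`entity_id` naming and layout are my own invention, not the DocRED authors' format):
```python
# Hypothetical flattened schema: each mention row carries an explicit entity index,
# so only a single Sequence-of-dict level is needed instead of Sequence(Sequence(dict)).
import nlp

features = nlp.Features({
    "title": nlp.Value("string"),
    "mentions": nlp.features.Sequence({
        "entity_id": nlp.Value("int32"),  # which vertexSet group the mention belongs to
        "name": nlp.Value("string"),
        "sent_id": nlp.Value("int32"),
        "pos": nlp.features.Sequence(nlp.Value("int32")),
        "type": nlp.Value("string"),
    }),
})
```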
If it's not possible, I can always convert it to a shallower structure (something like the sketch above). I'd rather not change the DocRED authors' structure if I don't have to though. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/319/timeline | null | completed | false |