url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64, nullable) | author_association (string) | active_lock_reason (null) | pull_request (dict) | body (string, nullable) | timeline_url (string) | performed_via_github_app (null) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://api.github.com/repos/huggingface/datasets/issues/422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/422/comments | https://api.github.com/repos/huggingface/datasets/issues/422/events | https://github.com/huggingface/datasets/pull/422 | 663,028,497 | MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2 | 422 | - Corrected encoding for IMDB. | {
"login": "ghazi-f",
"id": 25091538,
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghazi-f",
"html_url": "https://github.com/ghazi-f",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,339,219,000 | 1,595,433,773,000 | 1,595,433,773,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/422",
"html_url": "https://github.com/huggingface/datasets/pull/422",
"diff_url": "https://github.com/huggingface/datasets/pull/422.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/422.patch"
} | The preparation phase (after the download phase) crashed on Windows because the charmap codec was unable to decode certain characters. This change, suggested in Issue #347, fixes it for the IMDB dataset. | https://api.github.com/repos/huggingface/datasets/issues/422/timeline | null | true |
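The actual diff is not included in this record; as a rough sketch, the usual remedy for this kind of `charmap` decode error on Windows is to pin the file encoding explicitly when reading the raw text. The path below is a placeholder and this is an assumption about the fix, not the PR's code:
```python
# Hypothetical illustration: forcing UTF-8 avoids Windows' locale-dependent
# "charmap" codec when the IMDB text files contain non-ASCII characters.
with open("imdb/train/pos/0_9.txt", encoding="utf-8") as f:  # placeholder path
    review = f.read()
```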
https://api.github.com/repos/huggingface/datasets/issues/421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/421/comments | https://api.github.com/repos/huggingface/datasets/issues/421/events | https://github.com/huggingface/datasets/pull/421 | 662,213,864 | MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1 | 421 | Style change | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"What about the other PR #419 ?",
"Oh this is the PR where I ran make quality and make style and some previous files from master were changed",
"Oh right ! Let me fix the style myself if you don't mind"
] | 1,595,275,709,000 | 1,595,434,120,000 | 1,595,434,119,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/421",
"html_url": "https://github.com/huggingface/datasets/pull/421",
"diff_url": "https://github.com/huggingface/datasets/pull/421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/421.patch"
} | `make quality` and `make style` were run on the dataset scripts. | https://api.github.com/repos/huggingface/datasets/issues/421/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/420/comments | https://api.github.com/repos/huggingface/datasets/issues/420/events | https://github.com/huggingface/datasets/pull/420 | 662,029,782 | MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2 | 420 | Better handle nested features | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,263,453,000 | 1,595,319,649,000 | 1,595,318,992,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/420",
"html_url": "https://github.com/huggingface/datasets/pull/420",
"diff_url": "https://github.com/huggingface/datasets/pull/420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/420.patch"
} | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342 )
- make flatten handle deep features (useful for tfrecords conversion in #339 )
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | https://api.github.com/repos/huggingface/datasets/issues/420/timeline | null | true |
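The "deep features" handling above is easiest to picture as recursive flattening of nested feature dictionaries. The sketch below is only a generic illustration of that idea; the function name and separator are assumptions, not the PR's actual code:
```python
# Turns nested feature dicts such as {"answers": {"text": ..., "answer_start": ...}}
# into flat columns like "answers.text" and "answers.answer_start".
def flatten_features(features, parent_key="", sep="."):
    flat = {}
    for name, feature in features.items():
        key = f"{parent_key}{sep}{name}" if parent_key else name
        if isinstance(feature, dict):
            flat.update(flatten_features(feature, key, sep))
        else:
            flat[key] = feature
    return flat

print(flatten_features({"answers": {"text": "string", "answer_start": "int32"}}))
# {'answers.text': 'string', 'answers.answer_start': 'int32'}
```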
https://api.github.com/repos/huggingface/datasets/issues/419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/419/comments | https://api.github.com/repos/huggingface/datasets/issues/419/events | https://github.com/huggingface/datasets/pull/419 | 661,974,747 | MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz | 419 | EmoContext dataset add | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,260,125,000 | 1,595,578,921,000 | 1,595,578,920,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/419",
"html_url": "https://github.com/huggingface/datasets/pull/419",
"diff_url": "https://github.com/huggingface/datasets/pull/419.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/419.patch"
} | EmoContext Dataset add
Signed-off-by: lordtt13 <[email protected]> | https://api.github.com/repos/huggingface/datasets/issues/419/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/418/comments | https://api.github.com/repos/huggingface/datasets/issues/418/events | https://github.com/huggingface/datasets/issues/418 | 661,914,873 | MDU6SXNzdWU2NjE5MTQ4NzM= | 418 | Addition of google drive links to dl_manager | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ",
"Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`? it should work with google drive links.\r\n",
"Yes it worked, thank you!"
] | 1,595,256,722,000 | 1,595,259,572,000 | 1,595,259,572,000 | CONTRIBUTOR | null | null | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager and use gdown instead, because the dl_manager downloaded nothing from the Drive links.
This is the script for me:
```python
import json
import os

import gdown
import nlp

# _CITATION and _DESCRIPTION are defined earlier in the full script (omitted here).


class EmoConfig(nlp.BuilderConfig):
    """BuilderConfig for EmoContext."""

    def __init__(self, **kwargs):
        """BuilderConfig for EmoContext.
        Args:
          **kwargs: keyword arguments forwarded to super.
        """
        super(EmoConfig, self).__init__(**kwargs)


_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"


class EmoDataset(nlp.GeneratorBasedBuilder):
    """SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0"""

    VERSION = nlp.Version("1.0.0")
    force = False

    def _info(self):
        return nlp.DatasetInfo(
            description=_DESCRIPTION,
            features=nlp.Features(
                {
                    "text": nlp.Value("string"),
                    "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
                }
            ),
            supervised_keys=None,
            homepage="https://www.aclweb.org/anthology/S19-2005/",
            citation=_CITATION,
        )

    def _get_drive_url(self, url):
        base_url = 'https://drive.google.com/uc?id='
        split_url = url.split('/')
        return base_url + split_url[5]

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        if not os.path.exists("emo-train.json") or self.force:
            gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet=True)
        if not os.path.exists("emo-test.json") or self.force:
            gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet=True)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={
                    "filepath": "emo-train.json",
                    "split": "train",
                },
            ),
            nlp.SplitGenerator(
                name=nlp.Split.TEST,
                gen_kwargs={"filepath": "emo-test.json", "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, 'rb') as f:
            data = json.load(f)
        for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
            yield id_, {
                "text": text,
                "label": label,
            }
```
Can someone help me add support for Google Drive links to the default dl_manager, or add gdown as an alternative dl_manager? I'd like to add this dataset to nlp's official database. | https://api.github.com/repos/huggingface/datasets/issues/418/timeline | null | false |
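For reference, the resolution suggested in the comments above (pass the rewritten direct-download URL to the standard download manager instead of calling gdown manually) would look roughly like the sketch below; this is an assumption based on those comments, not code from the repository:
```python
# Rewrite a Google Drive "view" link into a direct-download link.
def _get_drive_url(url):
    return "https://drive.google.com/uc?id=" + url.split("/")[5]

# Inside EmoDataset._split_generators(self, dl_manager), one would then use:
#     train_path = dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL))
#     test_path = dl_manager.download_and_extract(_get_drive_url(_TEST_URL))
# and pass train_path / test_path through gen_kwargs instead of hard-coded filenames.
```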
https://api.github.com/repos/huggingface/datasets/issues/417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/417/comments | https://api.github.com/repos/huggingface/datasets/issues/417/events | https://github.com/huggingface/datasets/pull/417 | 661,804,054 | MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5 | 417 | Fix docstrins multiple metrics instances | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,250,539,000 | 1,595,411,460,000 | 1,595,411,459,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/417",
"html_url": "https://github.com/huggingface/datasets/pull/417",
"diff_url": "https://github.com/huggingface/datasets/pull/417.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/417.patch"
} | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated).
This should fix #304 | https://api.github.com/repos/huggingface/datasets/issues/417/timeline | null | true |
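A minimal sketch of the scenario the fix targets (the metric names are just examples): two metric instances created side by side should each keep their own `compute` docstring instead of ending up with duplicated text.
```python
import nlp

bleu = nlp.load_metric("bleu")
rouge = nlp.load_metric("rouge")

# Each instance should now carry its own, non-duplicated documentation.
print((bleu.compute.__doc__ or "")[:100])
print((rouge.compute.__doc__ or "")[:100])
```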
https://api.github.com/repos/huggingface/datasets/issues/416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/416/comments | https://api.github.com/repos/huggingface/datasets/issues/416/events | https://github.com/huggingface/datasets/pull/416 | 661,635,393 | MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4 | 416 | Fix xtreme panx directory | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"great, I think I did not download the data the way you do, but yours is more reasonable."
] | 1,595,239,757,000 | 1,595,319,346,000 | 1,595,319,344,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/416",
"html_url": "https://github.com/huggingface/datasets/pull/416",
"diff_url": "https://github.com/huggingface/datasets/pull/416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/416.patch"
} | Fix #412 | https://api.github.com/repos/huggingface/datasets/issues/416/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/415/comments | https://api.github.com/repos/huggingface/datasets/issues/415/events | https://github.com/huggingface/datasets/issues/415 | 660,687,076 | MDU6SXNzdWU2NjA2ODcwNzY= | 415 | Something is wrong with WMT 19 kk-en dataset | {
"login": "ChenghaoMou",
"id": 32014649,
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenghaoMou",
"html_url": "https://github.com/ChenghaoMou",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,595,146,731,000 | 1,595,238,866,000 | null | NONE | null | null | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'}
``` | https://api.github.com/repos/huggingface/datasets/issues/415/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/414/comments | https://api.github.com/repos/huggingface/datasets/issues/414/events | https://github.com/huggingface/datasets/issues/414 | 660,654,013 | MDU6SXNzdWU2NjA2NTQwMTM= | 414 | from_dict delete? | {
"login": "hackerxiaobai",
"id": 22817243,
"node_id": "MDQ6VXNlcjIyODE3MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackerxiaobai",
"html_url": "https://github.com/hackerxiaobai",
"followers_url": "https://api.github.com/users/hackerxiaobai/followers",
"following_url": "https://api.github.com/users/hackerxiaobai/following{/other_user}",
"gists_url": "https://api.github.com/users/hackerxiaobai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackerxiaobai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackerxiaobai/subscriptions",
"organizations_url": "https://api.github.com/users/hackerxiaobai/orgs",
"repos_url": "https://api.github.com/users/hackerxiaobai/repos",
"events_url": "https://api.github.com/users/hackerxiaobai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackerxiaobai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\nRight now if you want to use `from_dict` you have to install the package from the master branch\r\n```\r\npip install git+https://github.com/huggingface/nlp.git\r\n```",
"> `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\n> Right now if you want to use `from_dict` you have to install the package from the master branch\r\n> \r\n> ```\r\n> pip install git+https://github.com/huggingface/nlp.git\r\n> ```\r\nOK, thank you.\r\n"
] | 1,595,142,516,000 | 1,595,298,077,000 | 1,595,298,077,000 | NONE | null | null | AttributeError: type object 'Dataset' has no attribute 'from_dict' | https://api.github.com/repos/huggingface/datasets/issues/414/timeline | null | false |
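Once the master install mentioned in the comments is available, usage is straightforward; a minimal sketch with made-up column names:
```python
import nlp

# Build a small in-memory dataset directly from python lists.
dataset = nlp.Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
print(len(dataset), dataset[0])
```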
https://api.github.com/repos/huggingface/datasets/issues/413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/413/comments | https://api.github.com/repos/huggingface/datasets/issues/413/events | https://github.com/huggingface/datasets/issues/413 | 660,063,655 | MDU6SXNzdWU2NjAwNjM2NTU= | 413 | Is there a way to download only NQ dev? | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Unfortunately it's not possible to download only the dev set of NQ.\r\n\r\nI think we could add a way to download only the test set by adding a custom configuration to the processing script though.",
"Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially also others. \r\nFor us, it will in this case make the difference of using the library or keeping the old downloads of the raw dev datasets. \r\nHowever, I don't know if that fits into your plans with the library and can also understand if you don't want to support this.",
"I don't think we could force this behavior generally since the dataset script authors are free to organize the file download as they want (sometimes the mapping between split and files can be very much nontrivial) but we can add an additional configuration for Natural Question indeed as @lhoestq indicate."
] | 1,595,068,103,000 | 1,596,027,980,000 | null | NONE | null | null | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner")
```
But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading?
Thanks! | https://api.github.com/repos/huggingface/datasets/issues/413/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/412/comments | https://api.github.com/repos/huggingface/datasets/issues/412/events | https://github.com/huggingface/datasets/issues/412 | 660,047,139 | MDU6SXNzdWU2NjAwNDcxMzk= | 412 | Unable to load XTREME dataset from disk | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`",
"I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !",
"Thanks for the rapid fix @lhoestq!"
] | 1,595,066,100,000 | 1,595,319,344,000 | 1,595,319,344,000 | MEMBER | null | null | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.
As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:
```
# path where load_dataset is looking for fr.tar.gz
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/
# path where it actually exists
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/
```
## Steps to reproduce the problem
1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)
2. Run the following code snippet
```python
from nlp import load_dataset
# AmazonPhotos.zip is in the root of the folder
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
```
3. Here is the stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-26786bb5fa93> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
464 split_dict = SplitDict(dataset_name=self.name)
465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
467 # Checksums verification
468 if verify_infos:
/usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager)
725 panx_dl_dir = dl_manager.extract(panx_path)
726 lang = self.config.name.split(".")[1]
--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz"))
728 return [
729 nlp.SplitGenerator(
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
170 return tuple(mapped)
171 # Singleton
--> 172 return function(data_struct)
173
174
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
203 elif urlparse(url_or_filename).scheme == "":
204 # File, but it doesn't exist.
--> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename))
206 else:
207 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist
```
## OS and hardware
```
- `nlp` version: 0.3.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | https://api.github.com/repos/huggingface/datasets/issues/412/timeline | null | false |
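A hypothetical illustration of the mismatch described above (not the actual patch from the linked fix): the per-language archive sits inside the extracted `panx_dataset` folder, so the path join needs that extra component.
```python
import os

panx_dl_dir = "/root/.cache/huggingface/datasets/9b8c4f15.../"  # shortened placeholder
lang = "fr"

looked_for = os.path.join(panx_dl_dir, lang + ".tar.gz")                   # raises FileNotFoundError
actually_at = os.path.join(panx_dl_dir, "panx_dataset", lang + ".tar.gz")  # where the archive lives
```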
https://api.github.com/repos/huggingface/datasets/issues/411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/411/comments | https://api.github.com/repos/huggingface/datasets/issues/411/events | https://github.com/huggingface/datasets/pull/411 | 659,393,398 | MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy | 411 | Sbf | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,002,785,000 | 1,595,322,826,000 | 1,595,322,825,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/411",
"html_url": "https://github.com/huggingface/datasets/pull/411",
"diff_url": "https://github.com/huggingface/datasets/pull/411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/411.patch"
} | This PR adds the Social Bias Frames Dataset (ACL 2020).
Dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | https://api.github.com/repos/huggingface/datasets/issues/411/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/410/comments | https://api.github.com/repos/huggingface/datasets/issues/410/events | https://github.com/huggingface/datasets/pull/410 | 659,242,871 | MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3 | 410 | 20newsgroup | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,991,277,000 | 1,595,228,729,000 | 1,595,228,728,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/410",
"html_url": "https://github.com/huggingface/datasets/pull/410",
"diff_url": "https://github.com/huggingface/datasets/pull/410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/410.patch"
} | Add 20Newsgroup dataset.
#353 | https://api.github.com/repos/huggingface/datasets/issues/410/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/409/comments | https://api.github.com/repos/huggingface/datasets/issues/409/events | https://github.com/huggingface/datasets/issues/409 | 659,128,611 | MDU6SXNzdWU2NTkxMjg2MTE= | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | {
"login": "morganmcg1",
"id": 20516801,
"node_id": "MDQ6VXNlcjIwNTE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morganmcg1",
"html_url": "https://github.com/morganmcg1",
"followers_url": "https://api.github.com/users/morganmcg1/followers",
"following_url": "https://api.github.com/users/morganmcg1/following{/other_user}",
"gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions",
"organizations_url": "https://api.github.com/users/morganmcg1/orgs",
"repos_url": "https://api.github.com/users/morganmcg1/repos",
"events_url": "https://api.github.com/users/morganmcg1/events{/privacy}",
"received_events_url": "https://api.github.com/users/morganmcg1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It was fixed in 2ddd18d139d3047c9c3abe96e1e7d05bb360132c.\r\nCould you pull the latest changes from master @morganmcg1 ?",
"Thanks @lhoestq, works fine now!"
] | 1,594,982,188,000 | 1,595,342,092,000 | 1,595,342,092,000 | NONE | null | null | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
## Full Stacktrace
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-feb740dbec9a> in <module>
1 dataset = load_dataset('glue', 'mrpc', split='train')
----> 2 dataset = dataset.train_test_split(test_size=0.2)
~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)
1032 "writer_batch_size": writer_batch_size,
1033 }
-> 1034 train_kwargs = cache_kwargs.deepcopy()
1035 train_kwargs["split"] = "train"
1036 test_kwargs = cache_kwargs.deepcopy()
AttributeError: 'dict' object has no attribute 'deepcopy'
``` | https://api.github.com/repos/huggingface/datasets/issues/409/timeline | null | false |
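The fix referenced in the comments is not shown here; a minimal sketch of the kind of change involved (assumed, not the literal commit): plain dicts have no `deepcopy` method, so `copy.deepcopy` is used instead.
```python
import copy

cache_kwargs = {"keep_in_memory": False, "writer_batch_size": 1000}  # illustrative values

train_kwargs = copy.deepcopy(cache_kwargs)
train_kwargs["split"] = "train"
test_kwargs = copy.deepcopy(cache_kwargs)
test_kwargs["split"] = "test"
```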
https://api.github.com/repos/huggingface/datasets/issues/408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/408/comments | https://api.github.com/repos/huggingface/datasets/issues/408/events | https://github.com/huggingface/datasets/pull/408 | 659,064,144 | MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0 | 408 | Add tests datasets gcp | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,977,807,000 | 1,594,978,017,000 | 1,594,978,016,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/408",
"html_url": "https://github.com/huggingface/datasets/pull/408",
"diff_url": "https://github.com/huggingface/datasets/pull/408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/408.patch"
} | Some datasets are available on our Google Cloud Storage in Arrow format, so that users don't need to process the data themselves.
These tests make sure that they're always available. They also make sure that the dataset scripts are in sync between S3 and the repo.
This should avoid future issues like #407. | https://api.github.com/repos/huggingface/datasets/issues/408/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/407/comments | https://api.github.com/repos/huggingface/datasets/issues/407/events | https://github.com/huggingface/datasets/issues/407 | 658,672,736 | MDU6SXNzdWU2NTg2NzI3MzY= | 407 | MissingBeamOptions for Wikipedia 20200501.en | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed. Could you try again @mitchellgordon95 ?\r\nIt was due a file not being updated on S3.\r\n\r\nWe need to make sure all the datasets scripts get updated properly @julien-c ",
"Works for me! Thanks.",
"I found the same issue with almost any language other than English. (For English, it works). Will someone need to update the file on S3 again?",
"This is because only some languages are already preprocessed (en, de, fr, it) and stored on our google storage.\r\nWe plan to have a systematic way to preprocess more wikipedia languages in the future.\r\n\r\nFor the other languages you have to process them on your side using apache beam. That's why the lib asks for a Beam runner."
] | 1,594,943,283,000 | 1,610,451,676,000 | 1,594,995,868,000 | CONTRIBUTOR | null | null | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...
Traceback (most recent call last):
File "scripts/download.py", line 11, in <module>
fire.Fire(download_pretrain)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "scripts/download.py", line 6, in download_pretrain
nlp.load_dataset('wikipedia', "20200501.en", split='train')
File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset
save_infos=save_infos,
File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`
``` | https://api.github.com/repos/huggingface/datasets/issues/407/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?",
"> @lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?\r\n\r\nI just tried with `writer.write_table` with tables of 1000 elements and it's slower that the solution in #405 \r\n\r\nOn my side (select 10 000 examples):\r\n- Original implementation: 12s\r\n- Batched solution: 100ms\r\n- solution using arrow tables: 350ms\r\n\r\nI'll try with arrays and record batches to see if we can make it work.",
"I tried using `.take` from pyarrow recordbatches but it doesn't improve the speed that much:\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\ndset = nlp.Dataset.from_file(\"dummy_test_select.arrow\") # dummy dataset with 100000 examples like {\"a\": \"h\"*512}\r\nindices = np.random.randint(0, 100_000, 1000_000)\r\n```\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\",\r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n table = pa.concat_tables(dset._data.slice(int(i), 1) for i in indices[i : min(len(indices), i + batch_size)])\r\n batch = table.to_pydict()\r\n writer.write_batch(batch)\r\nwriter.finalize()\r\n# 9.12s\r\n```\r\n\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\", \r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n batch_indices = indices[i : min(len(indices), i + batch_size)]\r\n # First, extract only the indices that we need with a mask\r\n mask = [False] * len(dset)\r\n for k in batch_indices:\r\n mask[k] = True\r\n t_batch = dset._data.filter(pa.array(mask))\r\n # Second, build the list of indices for the filtered table, and taking care of duplicates\r\n rev_positions = {}\r\n duplicates = 0\r\n for i, j in enumerate(sorted(batch_indices)):\r\n if j in rev_positions:\r\n duplicates += 1\r\n else:\r\n rev_positions[j] = i - duplicates\r\n rev_map = [rev_positions[j] for j in batch_indices]\r\n # Third, use `.take` from the combined recordbatch\r\n t_combined = t_batch.combine_chunks() # load in memory\r\n recordbatch = t_combined.to_batches()[0]\r\n table = pa.Table.from_arrays(\r\n [recordbatch[c].take(pa.array(rev_map)) for c in range(len(dset._data.column_names))],\r\n schema=writer.schema\r\n )\r\n writer.write_table(table)\r\nwriter.finalize()\r\n# 3.2s\r\n```\r\n",
"Shuffling is now significantly faster thanks to #513 \r\nFeel free to play with it now :)\r\n\r\nClosing this one, but feel free to re-open if you have other questions"
] | 1,594,934,513,000 | 1,599,489,926,000 | 1,599,489,925,000 | CONTRIBUTOR | null | null | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | false |
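As a rough illustration of the book-keeping mentioned above (generic pyarrow, not the library's shuffle implementation): shuffling a columnar table is a gather, so every column has to be re-materialized in the new row order.
```python
import numpy as np
import pyarrow as pa

# Illustrative table; sizes and column names are made up.
table = pa.table({"text": [f"line {i}" for i in range(100_000)]})
permutation = np.random.permutation(len(table))
shuffled = table.take(pa.array(permutation))  # copies the data into the new order
```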
https://api.github.com/repos/huggingface/datasets/issues/405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/405/comments | https://api.github.com/repos/huggingface/datasets/issues/405/events | https://github.com/huggingface/datasets/pull/405 | 658,580,192 | MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3 | 405 | Make select() faster by batching reads | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,934,385,000 | 1,595,005,544,000 | 1,595,004,686,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/405",
"html_url": "https://github.com/huggingface/datasets/pull/405",
"diff_url": "https://github.com/huggingface/datasets/pull/405.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/405.patch"
} | Here's a benchmark:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
```
Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that). | https://api.github.com/repos/huggingface/datasets/issues/405/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,920,425,000 | 1,595,239,955,000 | 1,595,239,954,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch"
} | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
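A minimal usage sketch of the behaviour described above (the choice of metric and the exact keyword arguments are illustrative assumptions, not taken from this PR's diff):
```python
import nlp

# assumption: `seed` is forwarded by `load_metric` to the metric, as described above
metric = nlp.load_metric("rouge", seed=42)
# numpy's seed is set when `compute` is called, then reset afterwards
score = metric.compute(predictions=["the cat sat"], references=["the cat sat"])
```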
However, instantiating a metric twice (two different experiments) without specifying a seed can produce different results. | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/403/comments | https://api.github.com/repos/huggingface/datasets/issues/403/events | https://github.com/huggingface/datasets/pull/403 | 658,325,756 | MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2 | 403 | return python objects instead of arrays by default | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,914,712,000 | 1,594,985,821,000 | 1,594,985,820,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/403",
"html_url": "https://github.com/huggingface/datasets/pull/403",
"diff_url": "https://github.com/huggingface/datasets/pull/403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/403.patch"
} | We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
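For context, a small standalone pyarrow illustration of the difference (toy column, not the actual dataset):
```python
import pyarrow as pa

table = pa.table({"input_ids": [[101, 7277, 102]]})
print(type(table.to_pandas()["input_ids"][0]))  # <class 'numpy.ndarray'>
print(type(table.to_pydict()["input_ids"][0]))  # <class 'list'>
```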
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| https://api.github.com/repos/huggingface/datasets/issues/403/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/402/comments | https://api.github.com/repos/huggingface/datasets/issues/402/events | https://github.com/huggingface/datasets/pull/402 | 658,001,288 | MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0 | 402 | Search qa | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,890,010,000 | 1,594,909,620,000 | 1,594,909,619,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/402",
"html_url": "https://github.com/huggingface/datasets/pull/402",
"diff_url": "https://github.com/huggingface/datasets/pull/402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/402.patch"
} | add SearchQA dataset
#336 | https://api.github.com/repos/huggingface/datasets/issues/402/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/401/comments | https://api.github.com/repos/huggingface/datasets/issues/401/events | https://github.com/huggingface/datasets/pull/401 | 657,996,252 | MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0 | 401 | add web_questions | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"What does the `nlp-cli dummy_data` command returns ?",
"`test.json` -> `test` \r\nand \r\n`train.json` -> `train`\r\n\r\nas shown by the `nlp-cli dummy_data` command ;-)",
"LGTM for merge @lhoestq - I let you merge if you want to."
] | 1,594,889,699,000 | 1,596,694,580,000 | 1,596,694,579,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/401",
"html_url": "https://github.com/huggingface/datasets/pull/401",
"diff_url": "https://github.com/huggingface/datasets/pull/401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/401.patch"
} | add Web Question dataset
#336
Maybe you can help with the dummy_data structure, @patrickvonplaten? It's still broken | https://api.github.com/repos/huggingface/datasets/issues/401/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/400/comments | https://api.github.com/repos/huggingface/datasets/issues/400/events | https://github.com/huggingface/datasets/pull/400 | 657,975,600 | MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5 | 400 | Web questions | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,888,109,000 | 1,594,889,451,000 | 1,594,888,974,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch"
} | add the WebQuestion dataset
#336 | https://api.github.com/repos/huggingface/datasets/issues/400/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/399/comments | https://api.github.com/repos/huggingface/datasets/issues/399/events | https://github.com/huggingface/datasets/pull/399 | 657,841,433 | MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy | 399 | Spelling mistake | {
"login": "BlancRay",
"id": 9410067,
"node_id": "MDQ6VXNlcjk0MTAwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlancRay",
"html_url": "https://github.com/BlancRay",
"followers_url": "https://api.github.com/users/BlancRay/followers",
"following_url": "https://api.github.com/users/BlancRay/following{/other_user}",
"gists_url": "https://api.github.com/users/BlancRay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlancRay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlancRay/subscriptions",
"organizations_url": "https://api.github.com/users/BlancRay/orgs",
"repos_url": "https://api.github.com/users/BlancRay/repos",
"events_url": "https://api.github.com/users/BlancRay/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlancRay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | 1,594,874,278,000 | 1,594,882,188,000 | 1,594,882,177,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch"
} | In the "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..."; the word "other" is misspelled as "toehr". | https://api.github.com/repos/huggingface/datasets/issues/399/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/398/comments | https://api.github.com/repos/huggingface/datasets/issues/398/events | https://github.com/huggingface/datasets/pull/398 | 657,511,962 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1 | 398 | Add inline links | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?",
"Sure, I will do that too"
] | 1,594,832,644,000 | 1,595,412,862,000 | 1,595,412,862,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch"
} | Add inline links to `Contributing.md` | https://api.github.com/repos/huggingface/datasets/issues/398/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/397/comments | https://api.github.com/repos/huggingface/datasets/issues/397/events | https://github.com/huggingface/datasets/pull/397 | 657,510,856 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4 | 397 | Add contiguous sharding | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,832,578,000 | 1,595,005,171,000 | 1,595,005,171,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch"
} | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
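For intuition, a toy sketch of the index layout (the tiny dataset and the default strided behaviour are assumptions for illustration):
```python
import nlp

dset = nlp.Dataset.from_dict({"id": list(range(6))})
# default sharding (assumed strided): shard 0 -> rows 0, 2, 4 ; shard 1 -> rows 1, 3, 5
strided = [dset.shard(2, i)["id"] for i in range(2)]
# contiguous sharding (this PR): shard 0 -> rows 0, 1, 2 ; shard 1 -> rows 3, 4, 5
contiguous = [dset.shard(2, i, contiguous=True)["id"] for i in range(2)]
# concatenating the contiguous shards therefore restores the original row order
```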
Usage:
```
nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])
``` | https://api.github.com/repos/huggingface/datasets/issues/397/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/396/comments | https://api.github.com/repos/huggingface/datasets/issues/396/events | https://github.com/huggingface/datasets/pull/396 | 657,477,952 | MDExOlB1bGxSZXF1ZXN0NDQ5NTg3MDQ4 | 396 | Fix memory issue when doing select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,829,704,000 | 1,594,886,852,000 | 1,594,886,851,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/396",
"html_url": "https://github.com/huggingface/datasets/pull/396",
"diff_url": "https://github.com/huggingface/datasets/pull/396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/396.patch"
} | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.
Fix #395 | https://api.github.com/repos/huggingface/datasets/issues/396/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/395/comments | https://api.github.com/repos/huggingface/datasets/issues/395/events | https://github.com/huggingface/datasets/issues/395 | 657,454,983 | MDU6SXNzdWU2NTc0NTQ5ODM= | 395 | Memory issue when doing select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,594,827,818,000 | 1,594,886,851,000 | 1,594,886,851,000 | MEMBER | null | null | As noticed in #389, the following code loads the entire wikipedia dataset into memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function together with all the wikipedia data.
It's not the case with `.map` or `.filter`.
However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
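For intuition, a generic Python illustration of the failure mode (hypothetical names, not the exact code path in `arrow_dataset.py`): serializing something that holds a reference to the dataset drags all of its data along.
```python
import pickle

class Holder:
    def __init__(self):
        self.data = list(range(1_000_000))  # stand-in for the in-memory table
    def fn(self):
        pass

h = Holder()
print(len(pickle.dumps(h.fn)))  # pickling the bound method also pickles `h` and all of its data
```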
| https://api.github.com/repos/huggingface/datasets/issues/395/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/394/comments | https://api.github.com/repos/huggingface/datasets/issues/394/events | https://github.com/huggingface/datasets/pull/394 | 657,425,548 | MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0 | 394 | Remove remaining nested dict | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,825,552,000 | 1,594,885,192,000 | 1,594,885,191,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/394",
"html_url": "https://github.com/huggingface/datasets/pull/394",
"diff_url": "https://github.com/huggingface/datasets/pull/394.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/394.patch"
} | This PR deletes the remaining unnecessary nested dict
#378 | https://api.github.com/repos/huggingface/datasets/issues/394/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/393/comments | https://api.github.com/repos/huggingface/datasets/issues/393/events | https://github.com/huggingface/datasets/pull/393 | 657,330,911 | MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz | 393 | Fix extracted files directory for the DownloadManager | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,817,995,000 | 1,595,005,336,000 | 1,595,005,334,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/393",
"html_url": "https://github.com/huggingface/datasets/pull/393",
"diff_url": "https://github.com/huggingface/datasets/pull/393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/393.patch"
} | The cache dir was often cluttered by extracted files because of the download manager.
For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted. | https://api.github.com/repos/huggingface/datasets/issues/393/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/392/comments | https://api.github.com/repos/huggingface/datasets/issues/392/events | https://github.com/huggingface/datasets/pull/392 | 657,313,738 | MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx | 392 | Style change detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,816,334,000 | 1,595,337,516,000 | 1,595,006,003,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/392",
"html_url": "https://github.com/huggingface/datasets/pull/392",
"diff_url": "https://github.com/huggingface/datasets/pull/392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/392.patch"
} | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now)
- I've converted the integer 0,1 values to a boolean
- Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349. | https://api.github.com/repos/huggingface/datasets/issues/392/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/391/comments | https://api.github.com/repos/huggingface/datasets/issues/391/events | https://github.com/huggingface/datasets/issues/391 | 656,991,432 | MDU6SXNzdWU2NTY5OTE0MzI= | 391 | 🌟 [Metric Request] WOOD score | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2459308248,
"node_id": "MDU6TGFiZWwyNDU5MzA4MjQ4",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20request",
"name": "metric request",
"color": "d4c5f9",
"default": false,
"description": "Requesting to add a new metric"
}
] | open | false | null | [] | null | [] | 1,594,775,797,000 | 1,603,813,408,000 | null | NONE | null | null | WOOD score paper : https://arxiv.org/pdf/2007.06898.pdf
Abstract :
>Models that surpass human performance on several popular benchmarks display significant degradation in performance on exposure to Out of Distribution (OOD) data. Recent research has shown that models overfit to spurious biases and ‘hack’ datasets, in lieu of learning generalizable features like humans. In order to stop the inflation in model performance – and thus overestimation in AI systems’ capabilities – we propose a simple and novel evaluation metric, WOOD Score, that encourages generalization during evaluation. | https://api.github.com/repos/huggingface/datasets/issues/391/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/390/comments | https://api.github.com/repos/huggingface/datasets/issues/390/events | https://github.com/huggingface/datasets/pull/390 | 656,956,384 | MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3 | 390 | Concatenate datasets | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)\r\n```",
"I feel like \"WikiBooks\" would be a multi task dataset that could fit in the #217 discussion.\r\nNot sure concatenate should be the solution for a multi task dataset.",
"Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. Although this seems to be a different \"class\" of transformation function than map() or filter(), acting on two datasets rather than on one. I would prefer the function signature treat both datasets symmetrically.\r\n\r\nPython lists have `list1 + list2` or `list1.extend(list2)`.\r\nNumPy has `np.concatenate((arr1, arr2))`.\r\nPandas has `pd.join((df1, df2))`.\r\nPyTorch has `ConcatDataset((dset1, dset2))`.\r\n\r\nGiven the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?",
"The multi-task discussion is interesting, thanks for pointing me to that! I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.",
"> Given the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?\r\n\r\nYep I like this idea. Maybe `nlp.concatenate_datasets()` ?\r\n\r\n> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.\r\n\r\nI agree :)",
"Great, just updated!"
] | 1,594,769,077,000 | 1,595,411,398,000 | 1,595,411,398,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/390",
"html_url": "https://github.com/huggingface/datasets/pull/390",
"diff_url": "https://github.com/huggingface/datasets/pull/390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/390.patch"
} | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | https://api.github.com/repos/huggingface/datasets/issues/390/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/389/comments | https://api.github.com/repos/huggingface/datasets/issues/389/events | https://github.com/huggingface/datasets/pull/389 | 656,921,768 | MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5 | 389 | Fix pickling of SplitDict | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling/unpickling the Dataset object the \"sanctioned\" way of doing this? Or is there a better way that I'm missing?",
"I've had success with saving datasets to disk via:\r\n\r\n```python\r\ncache_file = \"/my/dset.cache\"\r\ndset = dset.map(whatever, cache_file_name=cache_file)\r\n# then, later\r\ndset = nlp.Dataset.from_file(cache_file)\r\n```\r\n\r\nThis restores the dataset with all the attributes I need.",
"Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow. \r\n\r\nRelated question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets.",
"Haha, opened a PR for that functionality about an hour ago: https://github.com/huggingface/nlp/pull/390. Glad we're on the same page :)",
"Datasets are not supposed to be pickled as pickle tries to put all the dataset in memory if I'm not wrong (and write all the data on disk).\r\nThe concatenate method however is a very cool feature, looking forward to having it merged :)",
"Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.\r\n\r\nI tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.). \r\n\r\n```\r\nimport nlp\r\nwiki = nlp.load_dataset('wikipedia', split='train')\r\nwiki = wiki.shard(16, 0) # Triggers pickling of dataset\r\n```\r\n\r\nI believe this is because [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.\r\n\r\nI don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended.",
"Thanks for reporting. Indeed this line shouldn't serialize the data but only the function itself.\r\n",
"Keeping this open because I would like to keep brainstorming a bit on this.\r\n\r\nOne note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind).",
"This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https://github.com/huggingface/transformers/issues/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.\r\n\r\n```python\r\nimport nlp\r\nimport multiprocessing\r\n\r\ndef func(ex):\r\n return {\"text\": \"Prefix: \" + ex[\"text\"]}\r\n\r\ndef map_helper(dset):\r\n return dset.map(func)\r\n\r\nn_shards = 16\r\ndset = nlp.load_dataset(\"wikitext-2-raw-v1\", split=\"train\")\r\nwith multiprocessing.Pool(processes=n_shards) as pool:\r\n shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])\r\ndset = nlp.concatenate_datasets(shards)\r\n```\r\n",
"Yes I agree.\r\n#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it'ok on your side now ?",
"Closing this, assuming it was fixed in #423."
] | 1,594,763,619,000 | 1,596,551,890,000 | 1,596,551,890,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/389",
"html_url": "https://github.com/huggingface/datasets/pull/389",
"diff_url": "https://github.com/huggingface/datasets/pull/389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/389.patch"
} | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
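A minimal sketch of that kind of workaround on a generic dict subclass (illustrative only, not the actual `SplitDict` code):
```python
import pickle

class GuardedDict(dict):
    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # hand pickle a plain payload plus a reconstruction function,
        # so that unpickling never goes through __setitem__
        return (_rebuild_guarded_dict, (dict(self),))

def _rebuild_guarded_dict(payload):
    out = GuardedDict()
    for key, value in payload.items():
        out.add(key, value)
    return out

d = GuardedDict()
d.add("train", 1)
assert pickle.loads(pickle.dumps(d)) == d
```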
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | https://api.github.com/repos/huggingface/datasets/issues/389/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/388/comments | https://api.github.com/repos/huggingface/datasets/issues/388/events | https://github.com/huggingface/datasets/issues/388 | 656,707,497 | MDU6SXNzdWU2NTY3MDc0OTc= | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelCahyawijaya",
"html_url": "https://github.com/SamuelCahyawijaya",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDownloading: 2%|▉ | 40.9M/2.37G [04:48<5:03:06, 128kB/s]\r\n`\r\nCould we just download a specific subdataset in 'wmt14', such as 'newstest14'? ",
"> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18\r\n\r\nThe original source for the files may provide slow download speeds.\r\nWe can probably host these files ourselves.\r\n\r\n> When trying to download wmt17 zh-en, I got the following error:\r\n> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz\r\n\r\nLooks like the file`UNv1.0.en-zh.tar.gz` is missing, or the url changed. We need to fix that\r\n\r\n> Could we just download a specific subdataset in 'wmt14', such as 'newstest14'?\r\n\r\nRight now I don't think it's possible. Maybe @patrickvonplaten knows more about it\r\n",
"Yeah, the download speed is sadly always extremely slow :-/. \r\nI will try to check out the `wmt17 zh-en` bug :-) ",
"Maybe this can be used - https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 "
] | 1,594,741,001,000 | 1,596,639,392,000 | null | NONE | null | null | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs, but the download speed is **extremely slow**; the same behaviour is not observed on `wmt16` and `wmt18`.
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | https://api.github.com/repos/huggingface/datasets/issues/388/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/387/comments | https://api.github.com/repos/huggingface/datasets/issues/387/events | https://github.com/huggingface/datasets/issues/387 | 656,361,357 | MDU6SXNzdWU2NTYzNjEzNTc= | 387 | Conversion through to_pandas output numpy arrays for lists instead of python objects | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe we can have to_pydict/to_pylist as the default and use to_numpy or to_pandas when the format (set by `set_format`) is 'numpy' or 'pandas'"
] | 1,594,707,841,000 | 1,594,985,820,000 | 1,594,985,820,000 | MEMBER | null | null | In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292,
1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938,
4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1])]}
>>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0])
<class 'numpy.ndarray'>
>>> dataset._data.slice(key, 1).to_pydict()
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
``` | https://api.github.com/repos/huggingface/datasets/issues/387/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/386/comments | https://api.github.com/repos/huggingface/datasets/issues/386/events | https://github.com/huggingface/datasets/pull/386 | 655,839,067 | MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4 | 386 | Update dataset loading and features - Add TREC dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just copied the files that are on google storage to follow the new `_relative_data_dir ` format. It should be good to merge now :)\r\n\r\nWell actually it seems there are some merge conflicts to fix first"
] | 1,594,645,818,000 | 1,594,887,478,000 | 1,594,887,478,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/386",
"html_url": "https://github.com/huggingface/datasets/pull/386",
"diff_url": "https://github.com/huggingface/datasets/pull/386.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/386.patch"
} | This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script, the data will be automatically updated instead of falling back to the previous version (which is usually outdated). In particular, this makes it easier to iterate when writing a new dataset loading script.
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept list, numpy arrays and PyTorch/TensorFlow tensors.
- add the TREC-6 dataset | https://api.github.com/repos/huggingface/datasets/issues/386/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/385/comments | https://api.github.com/repos/huggingface/datasets/issues/385/events | https://github.com/huggingface/datasets/pull/385 | 655,663,997 | MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5 | 385 | Remove unnecessary nested dict | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe",
"@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nfrom nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\nimport tempfile\r\n\r\n\r\ndef scan_for_nested_unnecessary_dict(dataset_name):\r\n\r\n def load_builder_class(dataset_name):\r\n module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n return import_main_class(module_path)\r\n\r\n def load_configs(dataset_name):\r\n builder_cls = load_builder_class(dataset_name)\r\n if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n return [None]\r\n return builder_cls.BUILDER_CONFIGS\r\n\r\n def scan_features_for_nested_dict(features):\r\n is_sequence = False\r\n if hasattr(features, \"_type\"):\r\n if features._type != 'Sequence':\r\n return False\r\n else:\r\n is_sequence = True\r\n features = features.feature\r\n\r\n if isinstance(features, list):\r\n for value in features:\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n\r\n elif isinstance(features, dict):\r\n for key, value in features.items():\r\n if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n return True\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n elif hasattr(features, \"_type\"):\r\n return False\r\n else:\r\n raise ValueError(f\"{features} should be either a list, a dict or a feature\")\r\n\r\n configs = load_configs(dataset_name)\r\n\r\n for config in configs:\r\n with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n # create config and dataset\r\n dataset_builder_cls = load_builder_class(dataset_name)\r\n name = config.name if config is not None else None\r\n dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n\r\n is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n if is_nested_dict_in_dataset:\r\n print(f\"{dataset_name} with {name} needs refactoring\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n\r\n # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n# api = hf_api.HfApi()\r\n# all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n# for dataset in all_datasets:\r\n# scan_for_nested_unnecessary_dict(dataset)\r\n```",
"> @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n> \r\n> ```python\r\n> #!/usr/bin/env python3\r\n> \r\n> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\n> import tempfile\r\n> \r\n> \r\n> def scan_for_nested_unnecessary_dict(dataset_name):\r\n> \r\n> def load_builder_class(dataset_name):\r\n> module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n> return import_main_class(module_path)\r\n> \r\n> def load_configs(dataset_name):\r\n> builder_cls = load_builder_class(dataset_name)\r\n> if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n> return [None]\r\n> return builder_cls.BUILDER_CONFIGS\r\n> \r\n> def scan_features_for_nested_dict(features):\r\n> is_sequence = False\r\n> if hasattr(features, \"_type\"):\r\n> if features._type != 'Sequence':\r\n> return False\r\n> else:\r\n> is_sequence = True\r\n> features = features.feature\r\n> \r\n> if isinstance(features, list):\r\n> for value in features:\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> \r\n> elif isinstance(features, dict):\r\n> for key, value in features.items():\r\n> if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n> return True\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> else:\r\n> raise ValueError(f\"{features} should be either a list of a dict\")\r\n> \r\n> configs = load_configs(dataset_name)\r\n> \r\n> for config in configs:\r\n> with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n> # create config and dataset\r\n> dataset_builder_cls = load_builder_class(dataset_name)\r\n> name = config.name if config is not None else None\r\n> dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n> \r\n> is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n> if is_nested_dict_in_dataset:\r\n> print(f\"{dataset_name} with {name} needs refactoring\")\r\n> \r\n> \r\n> if __name__ == \"__main__\":\r\n> scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n> \r\n> # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n> # api = hf_api.HfApi()\r\n> # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n> # for dataset in all_datasets:\r\n> # scan_for_nested_unnecessary_dict(dataset)\r\n> ```\r\n\r\nGreat, I will try it",
"I'm not sure the work on this PR was finished @lhoestq cc @mariamabarham @patrickvonplaten ",
"Sorry for that, apparently there are other datasets that could have unnecessary nested dicts.\r\nWe can have another PR to scan and fix the other datasets.\r\n"
] | 1,594,629,983,000 | 1,594,812,458,000 | 1,594,807,433,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/385",
"html_url": "https://github.com/huggingface/datasets/pull/385",
"diff_url": "https://github.com/huggingface/datasets/pull/385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/385.patch"
} | This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | https://api.github.com/repos/huggingface/datasets/issues/385/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/383/comments | https://api.github.com/repos/huggingface/datasets/issues/383/events | https://github.com/huggingface/datasets/pull/383 | 655,291,201 | MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | {
"login": "gaguilar",
"id": 5833357,
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaguilar",
"html_url": "https://github.com/gaguilar",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help me find out where I could have messed things up :)\r\n\r\nAlso, the real and dummy data tests passed before committing and pushing my changes.\r\n\r\nThanks a lot in advance!\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:243: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:137: in check_load_dataset\r\n try_from_hf_gcs=False,\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7efa744ffb70>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7efb304c52b0>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError\r\n=============================== warnings summary ===============================\r\n... \r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n====== 1 failed, 963 passed, 532 skipped, 5 warnings in 166.33s (0:02:46) ======\r\n\r\nExited with code exit status 1\r\n```",
"@lhoestq Hi Quentin, I was wondering if you could give some feedback on this error from the `run_dataset_script_tests` script. It seems that's coming from a different config builder than the one I added, so I am not sure why this error would occur. Thanks in advance!",
"Awesome! Thank you for all your comments! 👌 I will update the PR in a bit with all the required changes 🙂 \r\n\r\nLet me just provide a bit of context for my changes:\r\n\r\nI was referring to the GLUE, XTREME and WNUT_17 dataset scripts to build mine (not sure if the new documentation was available last week). This is where I took the naming convention for the citation and description variables. Also, these scripts didn't have the `BUILDER_CONFIG_CLASS = LinceConfig` line so I commented this out thinking I didn't need that; I tried this line in my attempts to make the real and dummy data tests pass but it was not helping. \r\n\r\nThe problem I was facing was that the tests were passing a default `BuilderConfig` (i.e., `self.config.name` property was set to `'default'` and my custom properties were not available). This means, for example, that within the `def _info(...)` method, I was not able to access the specific fields of my `LinceConfig` class (which is why I have now a global variable `_LINCE_CITATIONS`, to detach the individual citations from the corresponding LinceConfig objects, as well as I am constructing manually the feature infos). This default `BuilderConfig` is why I added the `if not isinstance(self.config, LinceConfig): return []` statement. Otherwise, accessing custom properties like `self.config.colnames` was failing the test because such properties did not exist in the default config (i.e., it was not a `LinceConfig`).\r\n\r\nI will update the PR and see if these problems happen in the CI tests.\r\n\r\nThanks again for the follow-up! @lhoestq ",
"Ok I see !\r\n\r\nTo give you more details: the line `BUILDER_CONFIG_CLASS = LinceConfig` tells the tests how to instantiate a config for this dataset. Therefore if you have this line you should have all the fields of your config available.\r\n\r\nTo fix the errors you get you'll have to, first, have the `BUILDER_CONFIG_CLASS = LinceConfig` line, and second, add default values for the parameters of your config (or the tests functions will be unable to instantiate it by calling `LinceConfig()`.\r\n\r\nAn example of dataset with a custom config with additional filed like this one is [biomrc](https://github.com/huggingface/nlp/blob/master/datasets/biomrc/biomrc.py).\r\nFeel free to give a look at it if you want.",
"Thanks for the reference!\r\n\r\nI just updated the PR with the suggested changes. It seems the CI failed on the same test you said we could ignore, so I guess it's okay :) \r\n\r\nPlease let me know if there is something else I may need to change."
] | 1,594,506,920,000 | 1,594,916,386,000 | 1,594,916,386,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/383",
"html_url": "https://github.com/huggingface/datasets/pull/383",
"diff_url": "https://github.com/huggingface/datasets/pull/383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/383.patch"
} | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).
>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.
The data comes from social media and here's the summary table of tasks per language pair:
| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |
The tasks are as follows:
* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis
With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.
## Usage
For Spanish-English LID, we can load the data as follows:
```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
print(data[split])
```
Here's the output:
```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```
Here's the list of shortcut names for every dataset available in LinCE:
* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`
All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features
Here is how the features look in the case of language identification (LID) tasks:
| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
For part-of-speech (POS) tagging:
| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |
For named entity recognition (NER):
| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |
**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.
For sentiment analysis (SA):
| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
| https://api.github.com/repos/huggingface/datasets/issues/383/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/382/comments | https://api.github.com/repos/huggingface/datasets/issues/382/events | https://github.com/huggingface/datasets/issues/382 | 655,290,482 | MDU6SXNzdWU2NTUyOTA0ODI= | 382 | 1080 | {
"login": "saq194",
"id": 60942503,
"node_id": "MDQ6VXNlcjYwOTQyNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saq194",
"html_url": "https://github.com/saq194",
"followers_url": "https://api.github.com/users/saq194/followers",
"following_url": "https://api.github.com/users/saq194/following{/other_user}",
"gists_url": "https://api.github.com/users/saq194/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saq194/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saq194/subscriptions",
"organizations_url": "https://api.github.com/users/saq194/orgs",
"repos_url": "https://api.github.com/users/saq194/repos",
"events_url": "https://api.github.com/users/saq194/events{/privacy}",
"received_events_url": "https://api.github.com/users/saq194/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,506,547,000 | 1,594,507,778,000 | 1,594,507,778,000 | NONE | null | null | https://api.github.com/repos/huggingface/datasets/issues/382/timeline | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/381/comments | https://api.github.com/repos/huggingface/datasets/issues/381/events | https://github.com/huggingface/datasets/issues/381 | 655,277,119 | MDU6SXNzdWU2NTUyNzcxMTk= | 381 | NLp | {
"login": "Spartanthor",
"id": 68147610,
"node_id": "MDQ6VXNlcjY4MTQ3NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spartanthor",
"html_url": "https://github.com/Spartanthor",
"followers_url": "https://api.github.com/users/Spartanthor/followers",
"following_url": "https://api.github.com/users/Spartanthor/following{/other_user}",
"gists_url": "https://api.github.com/users/Spartanthor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spartanthor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spartanthor/subscriptions",
"organizations_url": "https://api.github.com/users/Spartanthor/orgs",
"repos_url": "https://api.github.com/users/Spartanthor/repos",
"events_url": "https://api.github.com/users/Spartanthor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spartanthor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,500,614,000 | 1,594,500,639,000 | 1,594,500,639,000 | NONE | null | null | https://api.github.com/repos/huggingface/datasets/issues/381/timeline | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/378/comments | https://api.github.com/repos/huggingface/datasets/issues/378/events | https://github.com/huggingface/datasets/issues/378 | 655,226,316 | MDU6SXNzdWU2NTUyMjYzMTY= | 378 | [dataset] Structure of MLQA seems unecessary nested | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?",
"You're right, I think we don't need to use the nested dictionary. \r\n"
] | 1,594,480,568,000 | 1,594,829,840,000 | 1,594,829,840,000 | MEMBER | null | null | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
```python
features=nlp.Features(
{
"context": nlp.Value("string"),
"questions": nlp.features.Sequence({"question": nlp.Value("string")}),
"answers": nlp.features.Sequence(
{"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),}
),
"ids": nlp.features.Sequence({"idx": nlp.Value("string")})
``` | https://api.github.com/repos/huggingface/datasets/issues/378/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/377/comments | https://api.github.com/repos/huggingface/datasets/issues/377/events | https://github.com/huggingface/datasets/issues/377 | 655,215,790 | MDU6SXNzdWU2NTUyMTU3OTA= | 377 | Iyy!!! | {
"login": "ajinomoh",
"id": 68154535,
"node_id": "MDQ6VXNlcjY4MTU0NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajinomoh",
"html_url": "https://github.com/ajinomoh",
"followers_url": "https://api.github.com/users/ajinomoh/followers",
"following_url": "https://api.github.com/users/ajinomoh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajinomoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajinomoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajinomoh/subscriptions",
"organizations_url": "https://api.github.com/users/ajinomoh/orgs",
"repos_url": "https://api.github.com/users/ajinomoh/repos",
"events_url": "https://api.github.com/users/ajinomoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajinomoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,476,667,000 | 1,594,477,851,000 | 1,594,477,851,000 | NONE | null | null | https://api.github.com/repos/huggingface/datasets/issues/377/timeline | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/376/comments | https://api.github.com/repos/huggingface/datasets/issues/376/events | https://github.com/huggingface/datasets/issues/376 | 655,047,826 | MDU6SXNzdWU2NTUwNDc4MjY= | 376 | to_pandas conversion doesn't always work | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387",
"Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets use that).\r\nIt can cause issues when using dataset transforms like `filter` for example"
] | 1,594,416,811,000 | 1,595,239,845,000 | null | MEMBER | null | null | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data')
>>> squad['train']
Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)
>>> squad['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__
format_kwargs=self._format_kwargs,
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list"))
File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager
blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks
list(extension_columns.keys()))
File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks
File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
```
cc @lhoestq would we have a way to detect this from the schema maybe?
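One rough way to detect this ahead of time could be to walk the schema and flag the list-of-struct columns that trigger the `ArrowNotImplementedError` above (just a sketch using pyarrow's type-inspection helpers, nothing that exists in the lib yet):
```python
import pyarrow as pa

def _has_list_of_struct(dtype) -> bool:
    # recursively look for list<struct<...>> patterns, the shape the error above complains about
    if pa.types.is_list(dtype):
        return pa.types.is_struct(dtype.value_type) or _has_list_of_struct(dtype.value_type)
    if pa.types.is_struct(dtype):
        return any(_has_list_of_struct(dtype[i].type) for i in range(dtype.num_fields))
    return False

def columns_unsafe_for_pandas(schema: pa.Schema):
    # names of the columns we would need to convert without going through pandas
    return [field.name for field in schema if _has_list_of_struct(field.type)]
```
On the schema below this would flag the `paragraphs` column.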
Here is the schema for this pretty complex JSON:
```python
>>> squad['train'].schema
title: string
paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>
child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>
child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>
child 0, question: string
child 1, id: string
child 2, answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 3, is_impossible: bool
child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 1, context: string
``` | https://api.github.com/repos/huggingface/datasets/issues/376/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/375/comments | https://api.github.com/repos/huggingface/datasets/issues/375/events | https://github.com/huggingface/datasets/issues/375 | 655,023,307 | MDU6SXNzdWU2NTUwMjMzMDc= | 375 | TypeError when computing bertscore | {
"login": "willywsm1013",
"id": 13269577,
"node_id": "MDQ6VXNlcjEzMjY5NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willywsm1013",
"html_url": "https://github.com/willywsm1013",
"followers_url": "https://api.github.com/users/willywsm1013/followers",
"following_url": "https://api.github.com/users/willywsm1013/following{/other_user}",
"gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions",
"organizations_url": "https://api.github.com/users/willywsm1013/orgs",
"repos_url": "https://api.github.com/users/willywsm1013/repos",
"events_url": "https://api.github.com/users/willywsm1013/events{/privacy}",
"received_events_url": "https://api.github.com/users/willywsm1013/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_layers)\r\n 371 return sorted(list(set(l)), key=lambda x: len(x.split(\" \")))\r\n 372 \r\n--> 373 sentences = dedup_and_sort(refs + hyps)\r\n 374 embs = []\r\n 375 iter_range = range(0, len(sentences), batch_size)\r\n\r\nValueError: operands could not be broadcast together with shapes (0,) (2,)\r\n```\r\nThat's because it gets numpy arrays as input and not lists. See #387 ",
"The other issue was fixed by #403 \r\n\r\nDo you still get this issue @willywsm1013 ?\r\n"
] | 1,594,413,464,000 | 1,599,490,212,000 | null | NONE | null | null | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most recent call last):
File "bert_score_evaluate.py", line 16, in <module>
print (bertscore.compute(hyps, refs, lang='en'))
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```
It seems like there is something wrong with get_hash() function? | https://api.github.com/repos/huggingface/datasets/issues/375/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/374/comments | https://api.github.com/repos/huggingface/datasets/issues/374/events | https://github.com/huggingface/datasets/pull/374 | 654,895,066 | MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy | 374 | Add dataset post processing for faiss indexes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I changed the `wiki_dpr` script to ignore the last 24 examples for now. Hopefully we'll have the full version soon.\r\nThe datasets_infos.json and the data on GCS are updated.\r\n\r\nAnd I also added a check to make sure we don't have post processing resources in sub-directories.",
"I added a dummy config that can be loaded with:\r\n```python\r\nwiki = load_dataset(\"wiki_dpr\", \"dummy_psgs_w100_no_embeddings\", with_index=True, split=\"train\")\r\n```\r\nIt's only 6MB of arrow files and 30MB of index"
] | 1,594,398,359,000 | 1,594,647,843,000 | 1,594,647,841,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/374",
"html_url": "https://github.com/huggingface/datasets/pull/374",
"diff_url": "https://github.com/huggingface/datasets/pull/374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/374.patch"
} | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset and get the Faiss index that comes with it to do nearest-neighbor queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.Dataset` object, which is a different scope from what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` deal with. Therefore I added a new method for post-processing of the `nlp.Dataset` object, called `_post_process` (name could change)
- The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (that is focused on arrow files creation) so the post processing is run inside the `as_dataset` method.
- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources`
- since we know what the post-processing resources are, we can download them automatically from google storage instead of computing them when they're available (as we do for arrow files)
I'd be happy to discuss these choices!
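To make this concrete, here is a rough sketch of how the two hooks could look in a dataset script (signatures, class and file names are illustrative and based only on the description above):
```python
import nlp


class WikiDprWithIndex(nlp.GeneratorBasedBuilder):
    # the usual builder methods are omitted, this only sketches where the new hooks would live
    def _info(self): ...
    def _split_generators(self, dl_manager): ...
    def _generate_examples(self, **kwargs): ...

    def _post_processing_resources(self, split):
        # names of the extra files produced by post-processing (file name is made up)
        return {"embeddings_index": "{}.faiss".format(split)}

    def _post_process(self, dataset, resources_paths):
        # enrich the nlp.Dataset returned by as_dataset, e.g. attach a faiss index
        dataset.add_faiss_index(column="embeddings")
        return dataset
```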
## The `wiki_dpr` index
It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory.
This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.
I couldn't use the Faiss `index_factory` directly, as I needed to set the metric to inner product.
## Example of usage
```python
import nlp
dset = nlp.load_dataset(
"wiki_dpr",
"psgs_w100_with_nq_embeddings",
split="train",
with_index=True
)
print(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])
```
(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)
## Demo
You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers:
https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
| https://api.github.com/repos/huggingface/datasets/issues/374/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/373/comments | https://api.github.com/repos/huggingface/datasets/issues/373/events | https://github.com/huggingface/datasets/issues/373 | 654,845,133 | MDU6SXNzdWU2NTQ4NDUxMzM= | 373 | Segmentation fault when loading local JSON dataset as of #372 | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.json as paj\r\n\r\nimport nlp as hf_nlp\r\n\r\nfrom nlp import DatasetInfo, BuilderConfig, SplitGenerator, Split, utils\r\nfrom nlp.arrow_writer import ArrowWriter\r\n\r\n\r\nclass JSONDatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n\r\n```",
"Yes, deleting the directory solves the error whenever I try to rerun.\r\n\r\nBy replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `/home/XXX/.cache/lib/python3.7/site-packages/nlp/datasets/json/(...)/json.py` \r\n\r\nWhen I was testing this out before the #372 PR was merged I had issues installing it properly locally. Since the `json.py` script was downloaded instead of actually using the one provided in the local install. Manually updating that file seemed to solve it, but it didn't seem like a proper solution. Especially when having to run this on a remote compute cluster with no access to that directory.",
"I see, diving in the JSON file for SQuAD it's a pretty complex structure.\r\n\r\nThe best solution for you, if you have a dataset really similar to SQuAD would be to copy and modify the SQuAD data processing script. We will probably add soon an option to be able to specify file path to use instead of the automatic URL encoded in the script but in the meantime you can:\r\n- copy the [squad script](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) in a new script for your dataset\r\n- in the new script replace [these `urls_to_download `](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py#L99-L102) by `urls_to_download=self.config.data_files`\r\n- load the dataset with `dataset = load_dataset('path/to/your/new/script', data_files={nlp.Split.TRAIN: \"./datasets/train-v2.0.json\"})`\r\n\r\nThis way you can reuse all the processing logic of the SQuAD loading script.",
"This seems like a more sensible solution! Thanks, @thomwolf. It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation.\r\n\r\nAm I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from file? Meaning that essentially with a file containing another format, that is the only function that requires re-implementation? I'm working with a lot of datasets that, due to licensing and privacy, cannot be published. As this library is so neatly integrated with the transformers library and gives easy access to public sets such as SQUAD and increased performance, it is very neat to be able to load my private sets as well. As of now, I have just been working on scripts for translating all my data into the SQUAD-format before using the json script, but I see that it might not be necessary after all. ",
"Yes `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary you also need to update the `features` in the `_info`.\r\n\r\nI'm currently writing the doc so it should be easier soon to use the library and know how to add your datasets.\r\n",
"Could you try to update pyarrow to >=0.17.0 @vegarab ?\r\nI don't have any segmentation fault with my version of pyarrow (0.17.1)\r\n\r\nI tested with\r\n```python\r\nimport nlp\r\ns = nlp.load_dataset(\"json\", data_files=\"train-v2.0.json\", field=\"data\", split=\"train\")\r\ns[0]\r\n# {'title': 'Normans', 'paragraphs': [{'qas': [{'question': 'In what country is Normandy located?', 'id':...\r\n```",
"Also if you want to have your own dataset script, we now have a new documentation !\r\nSee here:\r\nhttps://huggingface.co/nlp/add_dataset.html",
"@lhoestq \r\nFor some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file.\r\n\r\nAnyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. Otherwise, pyarrow complains when attempting to cast the struct:\r\n```py\r\nimport nlp\r\n>>> s = nlp.load_dataset(\"json\", data_files=\"datasets/train-v2.0.json\", field=\"data\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> s[0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 558, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 498, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File \"pyarrow/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n>>> s\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 35)\r\n```\r\n\r\nUpgrading to >=0.17.0 provides the same dataset structure, but accessing the records is possible without the same exception. \r\n\r\n",
"Very happy to see some extended documentation! ",
"#376 seems to be reporting the same issue as mentioned above. ",
"This issue helped me a lot, thanks.\r\nHope this issue will be fixed soon."
] | 1,594,393,465,000 | 1,608,017,240,000 | null | CONTRIBUTOR | null | null | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | https://api.github.com/repos/huggingface/datasets/issues/373/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/372/comments | https://api.github.com/repos/huggingface/datasets/issues/372/events | https://github.com/huggingface/datasets/pull/372 | 654,774,420 | MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4 | 372 | Make the json script more flexible | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,386,915,000 | 1,594,392,727,000 | 1,594,392,726,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch"
} | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file).
In this case, indicate with `field=XXX` the name of the field in the JSON structure that contains the records you want to load. The records can be a dict of lists or a list of dicts.
E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:
```python
from nlp import load_dataset
dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data')
``` | https://api.github.com/repos/huggingface/datasets/issues/372/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/371/comments | https://api.github.com/repos/huggingface/datasets/issues/371/events | https://github.com/huggingface/datasets/pull/371 | 654,668,242 | MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw | 371 | Fix cached file path for metrics with different config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the fast fix!"
] | 1,594,375,344,000 | 1,594,388,722,000 | 1,594,388,720,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/371",
"html_url": "https://github.com/huggingface/datasets/pull/371",
"diff_url": "https://github.com/huggingface/datasets/pull/371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/371.patch"
} | The config name was not taken into account to build the cached file path.
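For illustration only, a minimal sketch of what a config-aware cache file name could look like (the helper and its arguments are hypothetical, not the actual code in `nlp/metric.py`):
```python
import os

# hypothetical helper: include the config name so e.g. 'glue/mrpc' and 'glue/sst2'
# no longer resolve to the same cached file
def metric_cache_path(cache_dir, metric_name, config_name, experiment_id=1, process_id=0):
    file_name = f"{experiment_id}-{metric_name}-{config_name}-{process_id}.arrow"
    return os.path.join(cache_dir, file_name)
```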
It should fix #368 | https://api.github.com/repos/huggingface/datasets/issues/371/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/370/comments | https://api.github.com/repos/huggingface/datasets/issues/370/events | https://github.com/huggingface/datasets/pull/370 | 654,304,193 | MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw | 370 | Allow indexing Dataset via np.ndarray | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like a flaky CI, failed download from S3."
] | 1,594,323,795,000 | 1,594,389,944,000 | 1,594,389,943,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/370",
"html_url": "https://github.com/huggingface/datasets/pull/370",
"diff_url": "https://github.com/huggingface/datasets/pull/370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/370.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/370/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/369/comments | https://api.github.com/repos/huggingface/datasets/issues/369/events | https://github.com/huggingface/datasets/issues/369 | 654,186,890 | MDU6SXNzdWU2NTQxODY4OTA= | 369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/",
"I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 but still getting this error. What could cause this error?"
] | 1,594,311,413,000 | 1,608,073,642,000 | 1,594,392,726,000 | CONTRIBUTOR | null | null | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False):
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables
file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
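The error message itself suggests increasing the block size; as a hedged, untested workaround (reusing the placeholder path from above), the file can be read with pyarrow directly using a larger `block_size`:
```python
import pyarrow.json as paj

# follow the error's hint: use bigger read blocks so a single JSON object
# no longer straddles two block boundaries
read_options = paj.ReadOptions(block_size=16 << 20)  # 16 MiB blocks
table = paj.read_json("./path/to/file.json", read_options=read_options)
```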
Beyond the hint in the error message itself, I haven't been able to find any reports of this specific pyarrow error here or elsewhere. | https://api.github.com/repos/huggingface/datasets/issues/369/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/368/comments | https://api.github.com/repos/huggingface/datasets/issues/368/events | https://github.com/huggingface/datasets/issues/368 | 654,087,251 | MDU6SXNzdWU2NTQwODcyNTE= | 368 | load_metric can't acquire lock anymore | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`."
] | 1,594,303,449,000 | 1,594,388,720,000 | 1,594,388,720,000 | NONE | null | null | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
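For reference, the error message points at passing a unique `experiment_id` (as the traceback above already does); a minimal sketch, with an arbitrary identifier:
```python
import nlp

# a unique experiment_id gives each run its own cache/lock file
metric = nlp.load_metric('glue', 'mrpc', experiment_id='my_unique_run_1')
```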
| https://api.github.com/repos/huggingface/datasets/issues/368/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/367/comments | https://api.github.com/repos/huggingface/datasets/issues/367/events | https://github.com/huggingface/datasets/pull/367 | 654,012,984 | MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz | 367 | Update Xtreme to add PAWS-X es | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,296,877,000 | 1,594,298,231,000 | 1,594,298,230,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch"
} | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | https://api.github.com/repos/huggingface/datasets/issues/367/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/366/comments | https://api.github.com/repos/huggingface/datasets/issues/366/events | https://github.com/huggingface/datasets/pull/366 | 653,954,896 | MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2 | 366 | Add quora dataset | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Tests seem to be failing because of pandas",
"Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now"
] | 1,594,290,862,000 | 1,594,661,721,000 | 1,594,661,721,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/366",
"html_url": "https://github.com/huggingface/datasets/pull/366",
"diff_url": "https://github.com/huggingface/datasets/pull/366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/366.patch"
} | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?"
"qid0": 0
"qid1": 1
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset | https://api.github.com/repos/huggingface/datasets/issues/366/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/365/comments | https://api.github.com/repos/huggingface/datasets/issues/365/events | https://github.com/huggingface/datasets/issues/365 | 653,845,964 | MDU6SXNzdWU2NTM4NDU5NjQ= | 365 | How to augment data ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?",
"Some samples in the dataset are too long, I want to divide them in several samples.",
"Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.",
"It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...",
"Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ",
"As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches."
] | 1,594,281,157,000 | 1,594,372,327,000 | 1,594,369,335,000 | NONE | null | null | Is there any clean way to augment data?
For now my workaround is to use a batched map, like this:
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=True)
``` | https://api.github.com/repos/huggingface/datasets/issues/365/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/364/comments | https://api.github.com/repos/huggingface/datasets/issues/364/events | https://github.com/huggingface/datasets/pull/364 | 653,821,597 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5 | 364 | add MS MARCO dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ",
"Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.",
"The fact that the dummy data for v2.1 is missing shouldn't make the test fails I think. But as you mention the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue.",
"Is MS mARCO added to nlp library?I am not able to view it?",
"> Is MS mARCO added to nlp library?I am not able to view it?\r\n\r\nHi @parthplc ,the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it.",
"Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!",
"> Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!\r\n\r\nthanks"
] | 1,594,278,679,000 | 1,596,694,549,000 | 1,596,694,548,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/364",
"html_url": "https://github.com/huggingface/datasets/pull/364",
"diff_url": "https://github.com/huggingface/datasets/pull/364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/364.patch"
} | This PR adds the MS MARCO dataset as requested in this issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper: https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | https://api.github.com/repos/huggingface/datasets/issues/364/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/363/comments | https://api.github.com/repos/huggingface/datasets/issues/363/events | https://github.com/huggingface/datasets/pull/363 | 653,821,172 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy | 363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | {
"login": "eltoto1219",
"id": 14030663,
"node_id": "MDQ6VXNlcjE0MDMwNjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltoto1219",
"html_url": "https://github.com/eltoto1219",
"followers_url": "https://api.github.com/users/eltoto1219/followers",
"following_url": "https://api.github.com/users/eltoto1219/following{/other_user}",
"gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions",
"organizations_url": "https://api.github.com/users/eltoto1219/orgs",
"repos_url": "https://api.github.com/users/eltoto1219/repos",
"events_url": "https://api.github.com/users/eltoto1219/events{/privacy}",
"received_events_url": "https://api.github.com/users/eltoto1219/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier since it ended up being tremendously helpful. ",
"Okay, I just converted the MultiArray class to Array2D, and got rid of all those \"globals()\"! \r\n\r\nThe main issues I had were that when including a \"pa.ExtensionType\" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered ) and then made each row a pa.Table and then concatenated all the tables. Also each n-dimensional vector class we implement will be size invariant which is some good news. ",
"Okay awesome! I just added your suggestions and changed up my recursive functions. \r\n\r\nHere is the traceback for the when I use the original code in the write_on_file method:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 33, in <module>\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 214, in finalize\r\n self.write_on_file()\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 134, in write_on_file\r\n pa_array = pa.array(self.current_rows, type=self._type)\r\n File \"pyarrow/array.pxi\", line 269, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 38, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 106, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>\r\n\r\nshell returned 1\r\n```\r\n\r\nI think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround. \r\n\r\nIn the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(***batch_size***) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.",
"> I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.\r\n\r\nIndeed that's weird.\r\n\r\n> In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.\r\n\r\nThe argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).\r\n\r\nWe can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.\r\n\r\nDo you still have errors that need to be fixed ?",
"@lhoestq Nope all should be good! \r\n\r\nWould you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?",
"> @lhoestq Nope all should be good!\r\n\r\nAwesome :)\r\n\r\nI think it would be good to start to add some tests then.\r\nYou already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n\r\n> Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n\r\nThat would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n- write speed + read speed a dataset with `nlp.Array2D` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\nIt will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n\r\nWhat do you think ?",
"Well actually it looks like we're still having the `print(dataset[0])` error no ?",
"I just tested your code to try to understand better.\r\n\r\n\r\n- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423 . Right now it raises an error but it can be fixed by adding this method to `ExtensionArray2D`:\r\n\r\n```python\r\n def to_pylist(self):\r\n return self.to_numpy().tolist()\r\n```\r\n\r\n- Second, I noticed that `ExtensionArray2D.to_numpy()` always return a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?\r\nTherefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()[\"image\"]) == 10 # True`)\r\n\r\n[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` by\r\n```python\r\nnumpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))\r\n```\r\nand it did the job: `len(dataset._data.to_pydict()[\"image\"]) == 2 # True`\r\n\r\n- Finally, I was able to make `to_pandas` work though, by implementing custom array dtype in pandas with arrow conversion (I got inspiration from [here](https://gist.github.com/Eastsun/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https://pandas.pydata.org/pandas-docs/version/1.0.0/development/extending.html#compatibility-with-apache-arrow))\r\n\r\nMaybe you could add me in your repo so I can open a PR to add these changes to your branch ?",
"`combine_chunks` doesn't seem to work btw:\r\n`ArrowNotImplementedError: concatenation of extension<arrow.py_extension_type>`",
"> > @lhoestq Nope all should be good!\r\n> \r\n> Awesome :)\r\n> \r\n> I think it would be good to start to add some tests then.\r\n> You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n> \r\n> > Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n> \r\n> That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n> \r\n> * write speed + read speed a dataset with `nlp.Array2D` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\n> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n> \r\n> What do you think ?\r\n\r\nYa! that should be no problem at all, Ill use the timeit module and get back to you with the results sometime over the weekend.",
"Thank you for all your help getting the pandas and row indexing for the dataset to work! For `print(dataset[0])`, I considered the workaround of doing `print(dataset[\"col_name\"][0])` a temporary solution, but ya, I was never able to figure out how to previously get it to work. I'll add you to my repo right now, let me know if you do not see the invite. Also lastly, it is strange how the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. ",
"I created the PR :)\r\nI also tested `to_batches` and it works on my side",
"Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to checkout?",
"Cool thanks for adding the tests :) \r\n\r\nNext step is merge master into this branch.\r\nNot sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'\r\n\r\nWe've done some changes in the features logic on master, so let me know if you need help merging it.\r\n\r\nAs soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to do !\r\nAbout the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think ?",
"We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq ",
"Yep I'm sure we can have it not for tomorrow's release but for the next one ;)",
"haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandasdtypemanger and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would actually help a lot. \r\n\r\nOther than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channelk soon about what to do with that because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA) which im sure people would be pretty happy about. \r\n\r\nAlso we can talk more tests soon too when you are free. \r\n\r\nGoodluck on the release tomorrow guys!",
"Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.\r\nMerging into master locally works on my side without conflicts\r\n```\r\ngit checkout master\r\ngit reset --hard origin/master\r\ngit merge --no-ff eltoto1219/support_multi_dim_tensors_for_images\r\nMerge made by the 'recursive' strategy.\r\n datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/test_multi_array.py | 45 +++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n src/nlp/arrow_dataset.py | 24 +++++-----\r\n src/nlp/arrow_writer.py | 22 ++++++++--\r\n src/nlp/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---\r\n tests/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n 7 files changed, 969 insertions(+), 21 deletions(-)\r\n create mode 100644 datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/test_multi_array.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/to_arrow_data.py\r\n create mode 100644 tests/test_array_2d.py\r\n```",
"I put everything inside one commit from the master branch but the merge conflicts on github'side were still there for some reason.\r\nClosing and re-opening the PR fixed the conflict check on github's side.",
"Almost done ! It still needs a pass on the docs/comments and maybe a few more tests.\r\n\r\nI had to do several changes for type inference in the ArrowWriter to make it support custom types.",
"Ok this is now ready for review ! Thanks for your awesome work in this @eltoto1219 \r\n\r\nSummary of the changes:\r\n- added new feature type `Array2D`, that can be instantiated like `Array2D(\"float32\")` for example\r\n- added pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray` that take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of list of any pyarrow array.\r\n- added pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow/python objects\r\n- refactor of the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.\r\n- added a utility object `TypedSequence` that is helpful to combine extension arrays and type inference inside the writer's methods.\r\n- added speed test for sequences writing (printed as warnings in pytest)\r\n- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields\r\n\r\nAnd there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.\r\n\r\nNote that there are some collisions in `arrow_dataset.py` with #513 so let's be careful when we'll merge this one.\r\n\r\nI know this is a big PR so feel free to ask questions",
"I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change",
"I took your comments into account and I added Array[3-5]D.\r\nI changed the storage type to fixed lengths lists. I had to update the `to_numpy` function because of that. Indeed slicing a FixedLengthListArray returns a view a of the original array, while in the previous case slicing a ListArray copies the storage.\r\n"
] | 1,594,278,630,000 | 1,598,263,175,000 | 1,598,263,175,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/363",
"html_url": "https://github.com/huggingface/datasets/pull/363",
"diff_url": "https://github.com/huggingface/datasets/pull/363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/363.patch"
} | nlp/features.py:
The main factory class is MultiArray. Every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema appears only as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490), hosted here: https://github.com/airsplay/lxmert. The reason I am not pulling from the source of truth for each individual dataset is that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ).
For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | https://api.github.com/repos/huggingface/datasets/issues/363/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/362/comments | https://api.github.com/repos/huggingface/datasets/issues/362/events | https://github.com/huggingface/datasets/issues/362 | 653,766,245 | MDU6SXNzdWU2NTM3NjYyNDU= | 362 | [dateset subset missing] xtreme paws-x | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You're right, thanks for pointing it out. We will update it "
] | 1,594,271,094,000 | 1,594,298,322,000 | 1,594,298,322,000 | CONTRIBUTOR | null | null | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError.
It turns out that the subset for Spanish is missing:
https://github.com/google-research-datasets/paws/tree/master/pawsx | https://api.github.com/repos/huggingface/datasets/issues/362/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/361/comments | https://api.github.com/repos/huggingface/datasets/issues/361/events | https://github.com/huggingface/datasets/issues/361 | 653,757,376 | MDU6SXNzdWU2NTM3NTczNzY= | 361 | 🐛 [Metrics] ROUGE is non-deterministic | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, can you give a full self-contained example to reproduce this behavior?",
"> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)",
"> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n> \r\n> Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.\r\n> \r\n> Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differents run :\r\n> \r\n> > ['0.3350', '0.1470', '0.2329']\r\n> > ['0.3358', '0.1451', '0.2332']\r\n> \r\n> Why ROUGE is not deterministic ?\r\n\r\nThis is because of rouge's `BootstrapAggregator` that uses sampling to get confidence intervals (low, mid, high).\r\nYou can get deterministic scores per sentence pair by using\r\n```python\r\nscore = rouge.compute(rouge_types=[\"rouge1\", \"rouge2\", \"rougeL\"], use_agregator=False)\r\n```\r\nOr you can set numpy's random seed if you still want to use the aggregator.",
"Maybe we can set all the random seeds of numpy/torch etc. while running `metric.compute` ?",
"We should probably indeed!",
"Now if you re-run the notebook, the two printed results are the same @colanim\r\n```\r\n['0.3356', '0.1466', '0.2318']\r\n['0.3356', '0.1466', '0.2318']\r\n```\r\nHowever across sessions, the results may change (as numpy's random seed can be different). You can prevent that by setting your seed:\r\n```python\r\nrouge = nlp.load_metric('rouge', seed=42)\r\n```"
] | 1,594,269,577,000 | 1,595,288,917,000 | 1,595,288,917,000 | NONE | null | null | If I run the ROUGE metric 2 times with the same predictions/references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
Example F-scores for ROUGE-1, ROUGE-2, ROUGE-L in 2 different runs:
> ['0.3350', '0.1470', '0.2329']
['0.3358', '0.1451', '0.2332']
---
Why is ROUGE not deterministic? | https://api.github.com/repos/huggingface/datasets/issues/361/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/360/comments | https://api.github.com/repos/huggingface/datasets/issues/360/events | https://github.com/huggingface/datasets/issues/360 | 653,687,176 | MDU6SXNzdWU2NTM2ODcxNzY= | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.",
"You're two steps ahead of me :) In my testing, it also works if `M` < `N`.\r\n\r\nA batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved.\r\n\r\nFor example,\r\n```python\r\n# Create a dummy dataset\r\ndset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")[\"test\"]\r\ndset = dset.map(lambda ex: {\"length\": len(ex[\"text\"]), \"foo\": 1})\r\n\r\n# Do an allreduce on each batch, overwriting both keys\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])], \"foo\": [1]})\r\n# Dataset(schema: {'length': 'int64', 'foo': 'int64'}, num_rows: 5)\r\n\r\n# Now attempt an allreduce without touching the `foo` key\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])]})\r\n# This fails with the error message below\r\n```\r\n\r\n```bash\r\n File \"/path/to/nlp/src/nlp/arrow_dataset.py\", line 728, in map\r\n arrow_schema = pa.Table.from_pydict(test_output).schema\r\n File \"pyarrow/io.pxi\", line 1532, in pyarrow.lib.Codec.detect\r\n File \"pyarrow/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named foo expected length 1 but got length 2\r\n```\r\n\r\nAdding the `remove_columns=[\"length\", \"foo\"]` argument to `map()` solves the issue. Leaving the above error for future visitors. Perfect, thank you!"
] | 1,594,256,683,000 | 1,594,323,111,000 | 1,594,323,111,000 | CONTRIBUTOR | null | null | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset.
However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]`
I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of examples of length `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.
My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
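As noted in the comments above, `map(batched=True)` can already return a batch of a different length than its input, so a many-to-many transform is possible today; a minimal sketch of the sentence-pair construction (the dataset choice and the `"text"` column are assumptions):
```python
import nlp

# any dataset with a "text" column works; wikitext is the one used elsewhere in this thread
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1")["test"]

def make_pairs(batch):
    # drop empty lines, then pair every sentence with every other one in the batch,
    # so the output batch is longer than the input batch
    texts = [t for t in batch["text"] if t.strip()]
    pairs = [a + "[SEP]" + b for a in texts for b in texts if a != b]
    return {"text": pairs}

# a small batch_size keeps the quadratic pairing cheap; remove_columns drops the input columns
paired = dataset.map(make_pairs, batched=True, batch_size=8, remove_columns=dataset.column_names)
```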
| https://api.github.com/repos/huggingface/datasets/issues/360/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/359/comments | https://api.github.com/repos/huggingface/datasets/issues/359/events | https://github.com/huggingface/datasets/issues/359 | 653,656,279 | MDU6SXNzdWU2NTM2NTYyNzk= | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", data_files=rel_datafiles)\r\n```",
"The behavior I'm seeing is from the `json` script. \r\nI hacked this together to overcome the error with the `JSON` dataloader\r\n\r\n```\r\nclass DatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n # this is where the error is coming from\r\n # def parse_schema(schema, schema_dict):\r\n # for field in schema:\r\n # if pa.types.is_struct(field.type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type, schema_dict[field.name])\r\n # elif pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type.value_type, schema_dict[field.name])\r\n # else:\r\n # schema_dict[field.name] = Value(str(field.type))\r\n # \r\n # parse_schema(writer.schema, features)\r\n # self.info.features = Features(features)\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n```\r\n\r\nSo I basically just don't populate the `self.info.features` though this doesn't seem to cause any problems in my downstream applications. \r\n\r\nThe other workaround I was doing was to just use pyarrow.json to build a table and then to create the Dataset with its constructor or from_table methods. `load_dataset` has nice split logic, so I'd prefer to use that.\r\n\r\n",
"Also noticed that if you for example in a loader script\r\n\r\n```\r\nfrom nlp import ArrowBasedBuilder\r\n\r\nclass MyBuilder(ArrowBasedBuilder):\r\n...\r\n\r\n```\r\nand use that in the subclass, it will be on the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_generate_examples` method... In the code it check for abstract classes but Builder and ArrowBasedBuilder aren't abstract classes, they're regular classes with `@abstract_methods`.",
"Indeed this is part of a more general limitation which is the fact that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (also happen when a user change the schema using `map()`, the features should be auto-generated and guessed as much as possible to keep the `features` synced with the underlying Arrow table schema).\r\n\r\nWe will try to solve this soon."
] | 1,594,250,645,000 | 1,594,392,726,000 | 1,594,392,726,000 | NONE | null | null | I tried using the JSON dataloader to load some JSON Lines files, but I get an exception in the `parse_schema` function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <module>
55 from nlp import load_dataset
56
---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles)
58
59
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
736 schema_dict[field.name] = Value(str(field.type))
737
--> 738 parse_schema(writer.schema, features)
739 self.info.features = Features(features)
740
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)
734 parse_schema(field.type.value_type, schema_dict[field.name])
735 else:
--> 736 schema_dict[field.name] = Value(str(field.type))
737
738 parse_schema(writer.schema, features)
<string> in __init__(self, dtype, id, _type)
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)
55
56 def __post_init__(self):
---> 57 self.pa_type = string_to_arrow(self.dtype)
58
59 def __call__(self):
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)
32 if str(type_str + "_") not in pa.__dict__:
33 raise ValueError(
---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. "
35 f"Please make sure to use a correct data type, see: "
36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to skip the schema validation, the dataset loads as well. | https://api.github.com/repos/huggingface/datasets/issues/359/timeline | null | false |
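A minimal sketch of the pyarrow-based workaround described in this issue: read the JSON Lines file with `pyarrow.json` and build the `Dataset` directly from the resulting Arrow table, as the reporter suggests. The file name is a placeholder, and passing the table straight to the `nlp.Dataset` constructor is an assumption based on the discussion above.

```python
# Sketch of the workaround from the issue above (not the official "json" loader).
import pyarrow.json as paj
from nlp import Dataset

table = paj.read_json("data.jsonl")   # placeholder path; pyarrow infers the schema, including list<item: string>
dset = Dataset(table)                 # build the Dataset directly from the Arrow table
print(dset.num_rows, dset.column_names)
```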
https://api.github.com/repos/huggingface/datasets/issues/358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/358/comments | https://api.github.com/repos/huggingface/datasets/issues/358/events | https://github.com/huggingface/datasets/pull/358 | 653,645,121 | MDExOlB1bGxSZXF1ZXN0NDQ2NTI0NjQ5 | 358 | Starting to add some real doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)\r\n\r\nThis first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html"
] | 1,594,248,783,000 | 1,594,720,697,000 | 1,594,720,695,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/358",
"html_url": "https://github.com/huggingface/datasets/pull/358",
"diff_url": "https://github.com/huggingface/datasets/pull/358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/358.patch"
} | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
Also:
- fix a bug in `train_test_split`
- update the `csv` script
- add a verbose argument to the dataset processing methods
Still missing:
- doc for the metrics
- how to directly upload a community provided dataset with the CLI
- clean up more docstrings
- add the `features` argument to `load_dataset` (should be another PR) | https://api.github.com/repos/huggingface/datasets/issues/358/timeline | null | true |
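Since the PR above mentions a bug fix in `train_test_split`, here is a small hedged usage sketch; the CSV file name is a placeholder and the exact keyword arguments may vary between versions.

```python
# Hedged usage sketch of Dataset.train_test_split (file name is a placeholder).
from nlp import load_dataset

dset = load_dataset("csv", data_files="my_file.csv")["train"]
splits = dset.train_test_split(test_size=0.1, seed=42)  # expected to return a dict-like with "train" and "test"
train_dset, test_dset = splits["train"], splits["test"]
print(len(train_dset), len(test_dset))
```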
https://api.github.com/repos/huggingface/datasets/issues/357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/357/comments | https://api.github.com/repos/huggingface/datasets/issues/357/events | https://github.com/huggingface/datasets/pull/357 | 653,642,292 | MDExOlB1bGxSZXF1ZXN0NDQ2NTIyMzU2 | 357 | Add hashes to cnn_dailymail | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks you to me :)\r\n\r\nCould you also update the json file that goes with the dataset script by doing \r\n```\r\nnlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs\r\n```\r\nIt will update the features metadata and the size of the dataset with your changes.",
"@lhoestq I ran that command.\r\n\r\nThanks for the helpful repository!"
] | 1,594,248,321,000 | 1,594,649,798,000 | 1,594,649,798,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/357",
"html_url": "https://github.com/huggingface/datasets/pull/357",
"diff_url": "https://github.com/huggingface/datasets/pull/357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/357.patch"
} | The URL hashes are helpful for comparing results from other sources. | https://api.github.com/repos/huggingface/datasets/issues/357/timeline | null | true |
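The hashes added here presumably follow the usual CNN/DailyMail convention of hashing each article URL with SHA-1; treating that convention as an assumption, a small sketch for reproducing such a hash when matching examples against results from other sources:

```python
# Sketch assuming the SHA-1 hex-digest convention used by the original CNN/DailyMail tooling.
import hashlib

def url_hash(url: str) -> str:
    return hashlib.sha1(url.encode("utf-8")).hexdigest()

print(url_hash("https://example.com/some-article"))  # placeholder URL
```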
https://api.github.com/repos/huggingface/datasets/issues/356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/356/comments | https://api.github.com/repos/huggingface/datasets/issues/356/events | https://github.com/huggingface/datasets/pull/356 | 653,537,388 | MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5 | 356 | Add text dataset | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,236,113,000 | 1,594,390,743,000 | 1,594,390,743,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/356",
"html_url": "https://github.com/huggingface/datasets/pull/356",
"diff_url": "https://github.com/huggingface/datasets/pull/356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/356.patch"
} | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text
```
but I would like a second set of eyes to ensure I did it right.
| https://api.github.com/repos/huggingface/datasets/issues/356/timeline | null | true |
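A short follow-up sketch for the `text` script added above: loading several splits at once by passing a dict of files. The file names are placeholders matching the dummy data described in the PR.

```python
# Hedged sketch: loading multiple splits with the "text" script (file names are placeholders).
from nlp import load_dataset

data_files = {"train": "train.txt", "validation": "dev.txt", "test": "test.txt"}
dsets = load_dataset("text", data_files=data_files)
print(dsets["train"][0])  # each example is expected to look like {"text": "..."}
```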
https://api.github.com/repos/huggingface/datasets/issues/355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/355/comments | https://api.github.com/repos/huggingface/datasets/issues/355/events | https://github.com/huggingface/datasets/issues/355 | 653,451,013 | MDU6SXNzdWU2NTM0NTEwMTM= | 355 | can't load SNLI dataset | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or the download speed is too slow, or sometimes the files take time to be processed.",
"Closing this one. Feel free to re-open if you have other questions :)",
"Thank you!"
] | 1,594,227,254,000 | 1,595,049,357,000 | 1,594,799,941,000 | CONTRIBUTOR | null | null | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
Btw, here's the stack trace:
```
File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested
return function(data_struct)
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda>
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip
``` | https://api.github.com/repos/huggingface/datasets/issues/355/timeline | null | false |
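While the upstream host is flaky, a client-side retry loop is one way to ride out intermittent `ConnectionError`s like the one above; this is purely illustrative and the retry count and delays are arbitrary.

```python
# Illustrative retry sketch for the flaky download described above.
import time
from nlp import load_dataset

snli = None
for attempt in range(3):
    try:
        snli = load_dataset("snli")
        break
    except ConnectionError:
        time.sleep(30 * (attempt + 1))  # back off before retrying
```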
https://api.github.com/repos/huggingface/datasets/issues/354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/354/comments | https://api.github.com/repos/huggingface/datasets/issues/354/events | https://github.com/huggingface/datasets/pull/354 | 653,357,617 | MDExOlB1bGxSZXF1ZXN0NDQ2MjkyMTc4 | 354 | More faiss control | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Ok, so we're getting rid of the `FaissGpuOptions`?\r\n\r\nWe support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different gpu options for the different parts of your index for example) that it's probably better to let the user create and configure its index and then use `custom_index=...`"
] | 1,594,219,520,000 | 1,594,288,494,000 | 1,594,288,491,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/354",
"html_url": "https://github.com/huggingface/datasets/pull/354",
"diff_url": "https://github.com/huggingface/datasets/pull/354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/354.patch"
} | Allow users to specify a faiss index they created themselves, since indexes can sometimes be composite, for example. | https://api.github.com/repos/huggingface/datasets/issues/354/timeline | null | true |
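A hedged sketch of the new `custom_index=...` option: build and configure a faiss index yourself (an HNSW index here, purely as an example) and hand it to `add_faiss_index`. The column name, embedding dimension, and toy data are assumptions, not part of the PR.

```python
# Sketch of passing a user-built faiss index via custom_index (toy data, assumed column name).
import numpy as np
import faiss
from nlp import Dataset

d = 8  # embedding dimension (assumption)
dset = Dataset.from_dict({
    "text": ["doc %d" % i for i in range(100)],
    "embeddings": np.random.rand(100, d).astype("float32").tolist(),
})

custom_index = faiss.IndexHNSWFlat(d, 32)  # an index configured by the user; no training required
dset.add_faiss_index(column="embeddings", custom_index=custom_index)

query = np.random.rand(d).astype("float32")
scores, examples = dset.get_nearest_examples("embeddings", query, k=5)
print(examples["text"])
```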
https://api.github.com/repos/huggingface/datasets/issues/353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/353/comments | https://api.github.com/repos/huggingface/datasets/issues/353/events | https://github.com/huggingface/datasets/issues/353 | 653,250,611 | MDU6SXNzdWU2NTMyNTA2MTE= | 353 | [Dataset requests] New datasets for Text Classification | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Pinging @mariamabarham as well",
"- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classification dataset",
"Thanks @jxmorris12 for pointing this out. \r\n\r\nIn glue we only have SST-2 maybe we can add separately SST-1.\r\n",
"This is the homepage for the Amazon dataset: https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products\r\n\r\nIs there an easy way to download kaggle datasets programmatically? If so, I can add this one!",
"Hi @jxmorris12 for now I think our `dl_manager` does not download from Kaggle.\r\n@thomwolf , @lhoestq",
"Pretty sure the quora dataset is the same one I implemented here: https://github.com/huggingface/nlp/pull/366",
"Great list. Any idea if Amazon Reviews has been added?\r\n\r\n- ~40 GB of text (sadly no emoji)\r\n- popular MLM pre-training dataset before bigger datasets like WebText https://arxiv.org/abs/1808.01371\r\n- turns out that binarizing the 1-5 star rating leads to great Pos/Neg/Neutral dataset, T5 paper claims to get very high accuracy (98%!) on this with small amount of finetuning https://arxiv.org/abs/2004.14546\r\n\r\nApologies if it's been included (great to see where) and if not, it's one of the better medium/large NLP dataset for semi-supervised learning, albeit a bit out of date. \r\n\r\nThanks!! \r\n\r\ncc @sshleifer ",
"On the Amazon Reviews dataset, the original UCSD website has noted these are now updated to include product reviews through 2018 -- actually quite recent compared to many other datasets. Almost certainly the largest NLP dataset out there with labels!\r\nhttps://jmcauley.ucsd.edu/data/amazon/ \r\n\r\nAny chance someone has time to onboard this dataset in a HF way?\r\n\r\ncc @sshleifer "
] | 1,594,210,678,000 | 1,603,165,283,000 | null | MEMBER | null | null | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- Yelp-5
- Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**
- SST (Stanford Sentiment Treebank) **[included in glue]**
- Multi-Perspective Question Answering (MPQA) dataset **[requires authentication (i.e. manual download)]**
- Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification
- 20 Newsgroups. The 20 Newsgroups dataset **[done]**
- Sogou News dataset **[done]**
- Reuters news. The Reuters-21578 dataset [165] **[done]**
- DBpedia. The DBpedia dataset [170]
- Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database
- EUR-Lex. The EUR-Lex dataset
- WOS. The Web Of Science (WOS) dataset **[done]**
- PubMed. PubMed [173]
- TREC-QA. TREC-QA
- Quora. The Quora dataset [180]
All these datasets are cited in https://arxiv.org/abs/2004.03705 | https://api.github.com/repos/huggingface/datasets/issues/353/timeline | null | false |
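For the entries marked **[done]** above, loading should already work; a tiny hedged check using two of the names mentioned in the thread:

```python
# Quick check for two of the classification sets marked [done] above.
from nlp import load_dataset

rotten = load_dataset("rotten_tomatoes")  # same data as the MR dataset, per the comments
ag_news = load_dataset("ag_news")
print(rotten["train"].features)
print(ag_news["train"].features)
```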
https://api.github.com/repos/huggingface/datasets/issues/352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/352/comments | https://api.github.com/repos/huggingface/datasets/issues/352/events | https://github.com/huggingface/datasets/pull/352 | 653,128,883 | MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky | 352 | 🐛[BugFix]fix seqeval | {
"login": "AlongWY",
"id": 20281571,
"node_id": "MDQ6VXNlcjIwMjgxNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/20281571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlongWY",
"html_url": "https://github.com/AlongWY",
"followers_url": "https://api.github.com/users/AlongWY/followers",
"following_url": "https://api.github.com/users/AlongWY/following{/other_user}",
"gists_url": "https://api.github.com/users/AlongWY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlongWY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlongWY/subscriptions",
"organizations_url": "https://api.github.com/users/AlongWY/orgs",
"repos_url": "https://api.github.com/users/AlongWY/repos",
"events_url": "https://api.github.com/users/AlongWY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlongWY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think this is good but can you detail a bit the behavior before and after your fix?",
"examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`\r\nbefore: `[('LOC', 0, 2), ('TIME', 4, 5)]`\r\nafter: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`\r\n\r\nThis is my test code:\r\n\r\n```python\r\nfrom metrics.seqeval.seqeval import end_of_chunk, start_of_chunk\r\n\r\n\r\ndef before_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk.split('-')[0]\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk.split('-')[-1]\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef after_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef main():\r\n examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']\r\n print(before_get_entities(examples_1))\r\n print(after_get_entities(examples_1))\r\n examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']\r\n print(before_get_entities(examples_2))\r\n print(after_get_entities(examples_2))\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"And we can get more examples not correct, such as:\r\n\r\ninput: `['B', 'I', 'I-I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2)]`\r\nafter: `[('_', 0, 1), ('I', 2, 2)]`\r\n\r\ninput: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`\r\nbefore: `[('TIME', 0, 2)]`\r\nafter: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]`",
"I think i didn't break any thing. Maybe the checks should be restart?",
"Could you please rebase from master @AlongWY ? This should fix the CI stuff",
"ok, i will do it",
"Indeed the official repo is quite stale. Let's merge it here, thanks @AlongWY "
] | 1,594,199,532,000 | 1,594,888,006,000 | 1,594,888,006,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch"
} | Fix how seqeval processes labels such as 'B' and 'B-ARGM-LOC'. | https://api.github.com/repos/huggingface/datasets/issues/352/timeline | null | true |
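A hedged usage sketch of the metric touched by this fix, using one of the ARGM-style label patterns from the discussion; the `compute(predictions=..., references=...)` call and the returned keys are assumptions based on the usual seqeval wrapper.

```python
# Hedged sketch of using the seqeval metric with ARGM-style labels (kwargs and result keys assumed).
from nlp import load_metric

seqeval = load_metric("seqeval")
references = [["B-ARGM-LOC", "I-ARGM-LOC", "O", "B-ARGM-TIME", "I-ARGM-TIME"]]
predictions = [["B-ARGM-LOC", "I-ARGM-LOC", "O", "B-ARGM-TIME", "O"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results)  # expected to contain overall precision/recall/F1 plus per-type scores
```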
https://api.github.com/repos/huggingface/datasets/issues/351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/351/comments | https://api.github.com/repos/huggingface/datasets/issues/351/events | https://github.com/huggingface/datasets/pull/351 | 652,424,048 | MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NTE4 | 351 | add pandas dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,136,287,000 | 1,594,217,716,000 | 1,594,217,715,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/351",
"html_url": "https://github.com/huggingface/datasets/pull/351",
"diff_url": "https://github.com/huggingface/datasets/pull/351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/351.patch"
} | Create a dataset from serialized pandas dataframes.
Usage:
```python
from nlp import load_dataset
dset = load_dataset("pandas", data_files="df.pkl")["train"]
``` | https://api.github.com/repos/huggingface/datasets/issues/351/timeline | null | true |
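A small end-to-end sketch for the script above: pickle a DataFrame and load it back through the `pandas` loader. Column names and the file path are placeholders.

```python
# Sketch: create a pickled DataFrame and load it with the "pandas" script (placeholder columns).
import pandas as pd
from nlp import load_dataset

df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
df.to_pickle("df.pkl")

dset = load_dataset("pandas", data_files="df.pkl")["train"]
print(dset[0])
```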
https://api.github.com/repos/huggingface/datasets/issues/350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/350/comments | https://api.github.com/repos/huggingface/datasets/issues/350/events | https://github.com/huggingface/datasets/pull/350 | 652,398,691 | MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz | 350 | add from_pandas and from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,134,233,000 | 1,594,217,673,000 | 1,594,217,672,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/350",
"html_url": "https://github.com/huggingface/datasets/pull/350",
"diff_url": "https://github.com/huggingface/datasets/pull/350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/350.patch"
} | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values); otherwise the Arrow schema is inferred from the data automatically by pyarrow.
One question that I have right now:
+ Should we also add a `save()` method that would write the dataset to disk? Right now, if we create a `Dataset` using those two new methods, the data is kept in RAM. Then, to reload it, we can call the `from_file()` method. | https://api.github.com/repos/huggingface/datasets/issues/350/timeline | null | true |
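A short sketch of the two new constructors, including the optional `features=...` argument mentioned above; column names and types are placeholders.

```python
# Sketch of Dataset.from_pandas / Dataset.from_dict with optional explicit features.
import pandas as pd
from nlp import Dataset, Features, Value

df = pd.DataFrame({"text": ["foo", "bar"], "label": [0, 1]})
dset_from_df = Dataset.from_pandas(df)

features = Features({"text": Value("string"), "label": Value("int64")})  # resolves null/nan ambiguities
dset_from_dict = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]}, features=features)
print(dset_from_df.features, dset_from_dict.features)
```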
https://api.github.com/repos/huggingface/datasets/issues/349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/349/comments | https://api.github.com/repos/huggingface/datasets/issues/349/events | https://github.com/huggingface/datasets/pull/349 | 652,231,571 | MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1 | 349 | Hyperpartisan news detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library).",
"This is an interesting aspect indeed!\r\nDo you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?\r\n@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success."
] | 1,594,119,997,000 | 1,594,154,847,000 | 1,594,133,831,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/349",
"html_url": "https://github.com/huggingface/datasets/pull/349",
"diff_url": "https://github.com/huggingface/datasets/pull/349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/349.patch"
} | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
| https://api.github.com/repos/huggingface/datasets/issues/349/timeline | null | true |
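Since this dataset goes through the manual-download path, loading presumably requires pointing `data_dir` at the files requested from Zenodo; the dataset name, config name, and directory below are assumptions based on the PR description.

```python
# Hedged sketch of loading after a manual Zenodo download (names and path are assumptions).
from nlp import load_dataset

dset = load_dataset(
    "hyperpartisan_news_detection",
    "byarticle",                      # config mentioned in the PR; "bypublisher" is presumably the other one
    data_dir="./pan_hyperpartisan",   # placeholder directory holding the requested files
)
```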
https://api.github.com/repos/huggingface/datasets/issues/348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/348/comments | https://api.github.com/repos/huggingface/datasets/issues/348/events | https://github.com/huggingface/datasets/pull/348 | 652,158,308 | MDExOlB1bGxSZXF1ZXN0NDQ1MjgwNjk3 | 348 | Add OSCAR dataset | {
"login": "pjox",
"id": 635220,
"node_id": "MDQ6VXNlcjYzNTIyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjox",
"html_url": "https://github.com/pjox",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.github.com/users/pjox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjox/subscriptions",
"organizations_url": "https://api.github.com/users/pjox/orgs",
"repos_url": "https://api.github.com/users/pjox/repos",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\n ",
"> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\nBut can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` first? 🤔 ",
"You make a good point! Do you know how big is it uncompressed?",
"Between 7T and 9T I think.",
"Hi ! I've been busy but I plan to compute the missing metadata soon !\r\nLooking forward to be able to load a memory mapped version of OSCAR :) ",
"> Hi ! I've been busy but I plan to compute the missing metadata soon !\r\n> Looking forward to be able to load a memory mapped version of OSCAR :)\r\n\r\nAmazing! Thanks! 😄 ",
"Hi there, are there any plans to complete this issue soon? I'm planning to use this dataset on a project. Let me know if there's anything I can do to help to finish this 🤗 ",
"Yes it will be added soon :) \r\nRecently the OSCAR data files were moved to another host. We just need to update the script and compute the dataset_infos.json (it will probably take a few days).",
"@lhoestq I've seen in oscar.py that it isn't a dataset script with manual download way. Is that correct? \r\nSome time ago, @pjox had some troubles with his servers providing that dataset 'cause it's really huge. Providing it on an automatic download way seems to be a little bit dangerous for me 😄 ",
"Now thanks to @pjox 's help OSCAR is hosted on HF's S3, which is probably more robust that the previous servers :)\r\n\r\nAlso small update on my side:\r\nI launched the computation of the dataset_infos.json file, it will take a few days.",
"Now it seems to be a good plan for me 🤗 ",
"But is there a plan to provide the OSCAR's unshuffled version too?",
"The one we have on S3 is currently the unshuffled version",
"I've thought that you won't provide the unshuffled version 'cause this comment on oscar.py:\r\n\r\n`# TODO(oscar): Implement unshuffled OSCAR`\r\n\r\n",
"That TODO is normal, I haven't touched the python script in months (I haven't had the time, sorry), but I guess @lhoestq fixed the paths if he's already working on the metadata. In any case from now on, only the unshuffled versions of OSCAR will be distributed through the hf/datasets library as in any case it is the version most people use to train language models.\r\n\r\nIf for any reason, you need the shuffled version it will always be available on the [OSCAR website](https://oscar-corpus.com).\r\n\r\nAlso future versions of OSCAR will be unshuffled only.",
"Should we close this PR now that the other one was merged?",
"Sure.\r\nClosing since #1694 is merged",
"@lhoestq just a little detail, is the Oscar version that HF offers the same one that was available on INRIA? By that I mean, have you done any further filtering or removing of data inside it? Thanks a lot! ",
"Hello @jchwenger, this is exactly the same (unshuffled) version that's available at Inria. Sadly no further filtering is provided, but after the latest OSCAR audit (https://arxiv.org/abs/2103.12028) we're already working on future versions of OSCAR that will be \"filtered\" and that will be available on the OSCAR website and hopefully here as well.",
"@pjox brilliant, in my case I was hoping it would be unfiltered, good news!"
] | 1,594,113,727,000 | 1,620,079,628,000 | 1,612,865,959,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/348",
"html_url": "https://github.com/huggingface/datasets/pull/348",
"diff_url": "https://github.com/huggingface/datasets/pull/348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/348.patch"
} | I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5 TB compressed, and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | https://api.github.com/repos/huggingface/datasets/issues/348/timeline | null | true |
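Once this lands, loading would presumably look like the sketch below. The config-name pattern (`unshuffled_deduplicated_<lang>`) is an assumption based on the unshuffled/deduplicated variants discussed above, and picking a small language avoids the multi-terabyte download.

```python
# Hedged sketch; the config name pattern is an assumption, and a small language keeps the download manageable.
from nlp import load_dataset

oscar_br = load_dataset("oscar", "unshuffled_deduplicated_br", split="train")
print(oscar_br[0]["text"][:200])
```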
https://api.github.com/repos/huggingface/datasets/issues/347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/347/comments | https://api.github.com/repos/huggingface/datasets/issues/347/events | https://github.com/huggingface/datasets/issues/347 | 652,106,567 | MDU6SXNzdWU2NTIxMDY1Njc= | 347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ",
"It should be in `xtreme.py:L755`:\r\n```python\r\n if self.config.name == \"tydiqa\" or self.config.name.startswith(\"MLQA\") or self.config.name == \"SQuAD\":\r\n with open(filepath) as f:\r\n data = json.load(f)\r\n```\r\n\r\nCould you try to add the encoding parameter:\r\n```python\r\nopen(filepath, encoding='utf-8')\r\n```",
"Hello @jerryIsHere :) Did it work ?\r\nIf so we may change the dataset script to force the utf-8 encoding",
"@lhoestq sorry for being that late, I found 4 copy of xtreme.py. I did the changes as what has been told to all of them.\r\nThe problem is not solved",
"Could you provide a better error message so that we can make sure it comes from the opening of the `tydiqa`'s json files ?\r\n",
"@lhoestq \r\nThe error message is same as before:\r\nException has occurred: UnicodeDecodeError\r\n'cp950' codec can't decode byte 0xe2 in position 111: illegal multibyte sequence\r\n File \"D:\\python\\test\\test.py\", line 3, in <module>\r\n dataset = load_dataset('xtreme', 'tydiqa')\r\n\r\n![image](https://user-images.githubusercontent.com/50871412/87748794-7c216880-c829-11ea-94f0-7caeacb4d865.png)\r\n\r\nI said that I found 4 copy of xtreme.py and add the 「, encoding='utf-8'」 parameter to the open() function\r\nthese python script was found under this directory\r\nC:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages\\nlp\\datasets\\xtreme\r\n",
"Hi there !\r\nI encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\nI added ```encoding='UTF-8'``` to both lines that have ```open``` in ```imdb.py``` (108 and 114) and it worked for me.\r\nThank you !",
"> Hi there !\r\n> I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\n> I added `encoding='UTF-8'` to both lines that have `open` in `imdb.py` (108 and 114) and it worked for me.\r\n> Thank you !\r\n\r\nHello !\r\nGlad you managed to fix this issue on your side.\r\nDo you mind opening a PR for IMDB ?",
"> This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\n> Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\n> See issues #242 and #307\r\n\r\nSorry for not responding for about a month.\r\nI have just found that it is necessary to change / add the environment variable as what was told in #242.\r\nEverything works after I add the new environment variable and restart my PC.\r\n\r\nI think the encoding issue for windows isn't limited to the open() function call specific to few dataset, but actually in the entire library, depends on the machine / os you use.",
"Since #481 we shouldn't have other issues with encodings as they need to be set to \"utf-8\" be default.\r\n\r\nClosing this one, but feel free to re-open if you gave other questions"
] | 1,594,109,663,000 | 1,599,490,305,000 | 1,599,490,305,000 | CONTRIBUTOR | null | null | ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png)
I guess the error is related to a Python source-encoding issue: my PC may be trying to decode the source code with the wrong encoding/decoding tools, perhaps:
https://www.python.org/dev/peps/pep-0263/
I guess the error was triggered by the code " module = importlib.import_module(module_path)" at line 57 in the source code: nlp/src/nlp/load.py / (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51)
Any ideas?
P.S. I tried the same code on Colab, and it runs perfectly.
| https://api.github.com/repos/huggingface/datasets/issues/347/timeline | null | false |
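A minimal sketch of the two fixes discussed in this thread: forcing UTF-8 in the dataset script's `open()` calls, and enabling Python's UTF-8 mode (PEP 540) so the platform default codec (`cp950` here) is not used. The helper function name is hypothetical.

```python
# (a) In the dataset script, decode the JSON files explicitly as UTF-8 (hypothetical helper).
import json

def read_json_utf8(filepath):
    with open(filepath, encoding="utf-8") as f:  # avoids the Windows default 'cp950' codec
        return json.load(f)

# (b) Or enable Python's UTF-8 mode before launching the script (PEP 540):
#     set PYTHONUTF8=1          (Windows cmd, before starting Python)
#     python -X utf8 test.py
```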
https://api.github.com/repos/huggingface/datasets/issues/346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/346/comments | https://api.github.com/repos/huggingface/datasets/issues/346/events | https://github.com/huggingface/datasets/pull/346 | 652,044,151 | MDExOlB1bGxSZXF1ZXN0NDQ1MTg4MTUz | 346 | Add emotion dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've tried it and am getting the same error as you.\r\n\r\nYou could use the text files rather than the pickle:\r\n```\r\nhttps://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt\r\nhttps://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt\r\nhttps://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt\r\n```\r\n\r\nThen you would get all 3 splits rather than just the train split.",
"Thanks a lot @ghomasHudson - silly me for not spotting that! \r\n\r\nI'll keep the PR open for now since I'm quite close to wrapping it up.",
"Hi @ghomasHudson your suggestion worked like a charm - the PR is now ready for review 😎 ",
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number?\r\nThank you in advance.",
"Hi @juliette-sch! Yes, I believe that having the labels as integers is now the default for many classification datasets. You can access the string label via the `ClassLabel.int2str` function ([docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=int2str#datasets.ClassLabel.int2str)), so you could add a new column to the dataset as follows:\r\n\r\n```python\r\nfrom datasets import load_dataset \r\n\r\nemotions = load_dataset(\"emotion\")\r\n\r\ndef label_int2str(row):\r\n return {\"label_name\": emotions[\"train\"].features[\"label\"].int2str(row[\"label\"])}\r\n\r\n# adds a new column called `label_name`\r\nemotions = emotions.map(label_int2str)\r\n```",
"Great, thank you very much @lewtun !"
] | 1,594,103,741,000 | 1,619,162,023,000 | 1,594,651,178,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/346",
"html_url": "https://github.com/huggingface/datasets/pull/346",
"diff_url": "https://github.com/huggingface/datasets/pull/346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/346.patch"
} | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | https://api.github.com/repos/huggingface/datasets/issues/346/timeline | null | true |
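The `invalid load key, '<'` most likely means the Dropbox URL returned an HTML page rather than the raw pickle, which is why switching to the plain-text files suggested in the comments works. Below is a hedged sketch of generating examples from those files, assuming each line has the form `<text>;<label>`.

```python
# Hedged sketch for the plain-text files suggested above, assuming lines like "i didnt feel humiliated;sadness".
def generate_examples(filepath):
    """Yield (id, example) pairs from a '<text>;<label>' file; the line format is an assumption."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            text, label = line.rstrip("\n").rsplit(";", 1)
            yield idx, {"text": text, "label": label}
```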
https://api.github.com/repos/huggingface/datasets/issues/345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/345/comments | https://api.github.com/repos/huggingface/datasets/issues/345/events | https://github.com/huggingface/datasets/issues/345 | 651,761,201 | MDU6SXNzdWU2NTE3NjEyMDE= | 345 | Supporting documents in ELI5 | {
"login": "saverymax",
"id": 29262273,
"node_id": "MDQ6VXNlcjI5MjYyMjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/29262273?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saverymax",
"html_url": "https://github.com/saverymax",
"followers_url": "https://api.github.com/users/saverymax/followers",
"following_url": "https://api.github.com/users/saverymax/following{/other_user}",
"gists_url": "https://api.github.com/users/saverymax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saverymax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saverymax/subscriptions",
"organizations_url": "https://api.github.com/users/saverymax/orgs",
"repos_url": "https://api.github.com/users/saverymax/repos",
"events_url": "https://api.github.com/users/saverymax/events{/privacy}",
"received_events_url": "https://api.github.com/users/saverymax/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading-support-documents-from-the-commoncrawl\r\n\r\nIn order to make the task accessible to people who may not have access to this kind of infrastructure, we suggest to use Wikipedia as a knowledge source rather than the full CommonCrawl. The following blog post shows how you can create Wikipedia support documents and get a performance that is on par with a system that uses CommonCrawl pages.\r\nhttps://yjernite.github.io/lfqa.html#task_description\r\n\r\nHope that helps, using ElasticSearch to index Wiki40b and create the documents should take about 4 hours. Let us know if you have any trouble with the blog post though!",
"Hi, thanks for the quick response. The blog post is quite an interesting working example, thanks for sharing it.\r\nTwo follow-up points/questions about my original question:\r\n\r\n1. Yes, I read that the facebook team could not share the CommonCrawl b/c of licensing reasons. They state \"No, we are not allowed to host processed Reddit or CommonCrawl data,\" which indicates they could also not share the Reddit data for licensing reasons. But it seems that HuggingFace is able to share the Reddit data, so why not a subset of CommonCrawl?\r\n\r\n2. Thanks for the suggestion about ElasticSearch and Wiki40b. This is good to know about performance. I definitely could do the indexing and querying myself. What I like about the ELI5 dataset though, at least what is suggested by the paper, is that to create the dataset they had already selected the top 100 web sources and made a single support document from those. Though it doesn't appear to be too sophisticated an approach, having a single support document pre-computed (without having to run the facebook code or a replacement with another dataset) is super useful for my work, especially since I'm not working on developing the latest and greatest retrieval model. Of course, I don't expect HF NLP datasets to be perfectly tailored to my use-case. I know there is overhead to any project, I'm just illustrating a use-case of ELI5 which is not possible with the data provided as-is. If it's for licensing reasons, that is perfectly acceptable a reason, and I appreciate your response."
] | 1,594,062,853,000 | 1,603,813,125,000 | 1,603,813,125,000 | NONE | null | null | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least.
If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :( | https://api.github.com/repos/huggingface/datasets/issues/345/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/344/comments | https://api.github.com/repos/huggingface/datasets/issues/344/events | https://github.com/huggingface/datasets/pull/344 | 651,495,246 | MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw | 344 | Search qa | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ?"
] | 1,594,038,196,000 | 1,594,889,896,000 | 1,594,889,896,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/344",
"html_url": "https://github.com/huggingface/datasets/pull/344",
"diff_url": "https://github.com/huggingface/datasets/pull/344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/344.patch"
} | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: the split version (train/test/validation)
#336 | https://api.github.com/repos/huggingface/datasets/issues/344/timeline | null | true |
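A minimal usage sketch for the two configurations listed above, assuming the dataset is loaded through the library's standard `load_dataset` entry point (the package was still named `nlp` at the time); the exact split layout of `train_test_val` is not spelled out in this PR.
```python
from nlp import load_dataset  # the package was still named `nlp` at the time of this PR

# "train_test_val" is the pre-split configuration named in the PR description
search_qa = load_dataset("search_qa", "train_test_val")
print(search_qa)  # expected to expose the train/test/validation splits
```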
https://api.github.com/repos/huggingface/datasets/issues/343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/343/comments | https://api.github.com/repos/huggingface/datasets/issues/343/events | https://github.com/huggingface/datasets/pull/343 | 651,419,630 | MDExOlB1bGxSZXF1ZXN0NDQ0Njc4NDEw | 343 | Fix nested tensorflow format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,030,425,000 | 1,594,041,112,000 | 1,594,041,111,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/343",
"html_url": "https://github.com/huggingface/datasets/pull/343",
"diff_url": "https://github.com/huggingface/datasets/pull/343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/343.patch"
} | In #339 and #337 we are thinking about adding a way to export datasets to tfrecords.
However, I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert the features to `tf.ragged.constant`.
I also added tests on the `set_format` function. | https://api.github.com/repos/huggingface/datasets/issues/343/timeline | null | true |
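To illustrate the idea behind the fix, here is a minimal, self-contained sketch assuming SQuAD-style nested answers; it only shows how variable-length nested lists map onto `tf.ragged.constant`, not the actual conversion code of this PR.
```python
import tensorflow as tf

# SQuAD-style nested feature: a variable number of answer texts per example
answers_text = [["Paris"], ["blue", "azure"], []]

# Ragged constants accept rows of different lengths, unlike tf.constant
ragged = tf.ragged.constant(answers_text)
print(ragged.shape)  # (3, None) -- the inner dimension is ragged
```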
https://api.github.com/repos/huggingface/datasets/issues/342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/342/comments | https://api.github.com/repos/huggingface/datasets/issues/342/events | https://github.com/huggingface/datasets/issues/342 | 651,333,194 | MDU6SXNzdWU2NTEzMzMxOTQ= | 342 | Features should be updated when `map()` changes schema | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`dataset.column_names` are being updated but `dataset.features` aren't indeed..."
] | 1,594,022,603,000 | 1,595,499,316,000 | 1,595,499,316,000 | MEMBER | null | null | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | https://api.github.com/repos/huggingface/datasets/issues/342/timeline | null | false |
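A short sketch of the behaviour in question, assuming the `imdb` dataset purely as an example (the issue itself names no dataset): the column added by `map()` shows up in `column_names`, but `features` is not updated.
```python
from nlp import load_dataset  # the package was still named `nlp` at the time

dset = load_dataset("imdb", split="train[:100]")

# map() adds a new column, so the schema changes
dset = dset.map(lambda ex: {"text_length": len(ex["text"])})

print(dset.column_names)  # contains the new "text_length" column
print(dset.features)      # at the time of this issue, "text_length" was missing here
```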
https://api.github.com/repos/huggingface/datasets/issues/341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/341/comments | https://api.github.com/repos/huggingface/datasets/issues/341/events | https://github.com/huggingface/datasets/pull/341 | 650,611,969 | MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx | 341 | add fever dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,784,387,000 | 1,594,040,628,000 | 1,594,040,627,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/341",
"html_url": "https://github.com/huggingface/datasets/pull/341",
"diff_url": "https://github.com/huggingface/datasets/pull/341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/341.patch"
} | This PR adds the FEVER dataset (https://fever.ai/) used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf).
#336 | https://api.github.com/repos/huggingface/datasets/issues/341/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/340/comments | https://api.github.com/repos/huggingface/datasets/issues/340/events | https://github.com/huggingface/datasets/pull/340 | 650,533,920 | MDExOlB1bGxSZXF1ZXN0NDQ0MDA2Nzcy | 340 | Update cfq.py | {
"login": "brainshawn",
"id": 4437290,
"node_id": "MDQ6VXNlcjQ0MzcyOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4437290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brainshawn",
"html_url": "https://github.com/brainshawn",
"followers_url": "https://api.github.com/users/brainshawn/followers",
"following_url": "https://api.github.com/users/brainshawn/following{/other_user}",
"gists_url": "https://api.github.com/users/brainshawn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brainshawn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brainshawn/subscriptions",
"organizations_url": "https://api.github.com/users/brainshawn/orgs",
"repos_url": "https://api.github.com/users/brainshawn/repos",
"events_url": "https://api.github.com/users/brainshawn/events{/privacy}",
"received_events_url": "https://api.github.com/users/brainshawn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @brainshawn for this update"
] | 1,593,775,399,000 | 1,593,779,630,000 | 1,593,779,630,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/340",
"html_url": "https://github.com/huggingface/datasets/pull/340",
"diff_url": "https://github.com/huggingface/datasets/pull/340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/340.patch"
} | Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions. | https://api.github.com/repos/huggingface/datasets/issues/340/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/339/comments | https://api.github.com/repos/huggingface/datasets/issues/339/events | https://github.com/huggingface/datasets/pull/339 | 650,156,468 | MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw | 339 | Add dataset.export() to TFRecords | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.",
"For datasets with nested features we have two aspects to take into account:\r\n1) There can be nested dict of features. What is done in tensorflow_datasets to make things work is to flatten the dictionaries to end up with one single dictionary. A dict like `{\"column1\": {\"subfeature\": ...}}` is converted to `{\"column1/subfeature\":...}`\r\n2) There can be ragged tensors, i.e. lists of objects with non-fixed shapes. For example in squad there are often multiple possible answers per question. What is done in tensorflow_datasets to make things work is to concatenate everything and add ragged attributes (cf serialization code [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/example_serializer.py))",
"Note that we have `flatten` method in `ArrowDataset`",
"I added support for nested dictionaries. A few more design decisions popped up:\r\n\r\n_Should we serialize from NumPy arrays or from tf.Tensors?_\r\n- The [tfds example serializer](url) works from NumPy arrays.\r\n- Calling `dset.set_format(\"tensorflow\")` makes `__getitem__` return a tf.Tensor. So serializing from NumPy arrays would mean calling `dset.export()` before setting the format, which is confusing.\r\n- NumPy arrays can be serialized as their underlying datatype (int, float), while tf.Tensors must be converted to strings before serialization. This adds another step when serializing and deserializing, and removes the static-typing advantages of the TFRecord format.\r\n\r\nI think we should export directly from the underlying NumPy arrays into TFRecords, rather than using an intermediate step of tf.Tensor.\r\n\r\n_Should we serialize lists of dictionaries?_\r\n- The test_format_nested() test creates a list of dictionaries: https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/tests/test_arrow_dataset.py#L278-L288\r\n- This is difficult to serialize effectively, and I'm not aware of any dataset that has this format. SQuAD has a dictionary of lists, such as the `answers` key. Is this necessary?",
"Thanks @thomwolf, used dset.flatten() to simplify. That handles the case of nested dictionaries, and then lists can be read into a tf.io.RaggedFeature in the case of something like squad answers.",
"@jarednielsen I just checked and indeed we don't have lists of dicts, we can just focus on the squad format as a reference then :) I'll change the test to remove this format that's not supposed to happen",
"Actually I realised that `flatten` also handles nested things like pyarrow's list<struct> so it's fine :D \r\nThis is so cool !\r\n\r\nCould you also add a test with a squad-like dataset ? As soon as we have that I think we'll be good to merge @jarednielsen :)\r\nGood job !",
"Great, done! I think this could be a great canonical way to generate a dataset.",
"I tried to match the format of Dataset.sort() and Dataset.shuffle() with the docstring. What difference are you referring to specifically?",
"Oh my bad they're fine actually (I was thinking of the backticks that we don't use in the docstrings of the transformers repo for argument names)",
"One final thing: now that we have a brand new documentation, could you just add `export` to the list of documented methods in [docs/source/package_reference/main_classes.rst](https://github.com/huggingface/nlp/blob/master/docs/source/package_reference/main_classes.rst) (so that it will appear in the docs [here](https://huggingface.co/nlp/package_reference/main_classes.html)) ?\r\n",
"Done",
"Cool thanks :)",
"Since #403 (it just got merged), we return python objects and not numpy arrays anymore (unless format=\"numpy\" is specified).\r\nDo you think it can break the export method ? Could you try to rebase from master to run the CI to make sure it's fine ?",
"Good catch. I fixed it up so it works with the new format. By the way, when dset.format == \"numpy\", it now returns single items (like `0`) as a 0-dimensional NumPy array. Not sure if that is desired.",
"I played a little bit with the code and it works quite well :)\r\n\r\nI found two cases for which it doesn't work though:\r\n- if the features dict depth is > 2 (ex: wikisql), because `flatten` only flattens the first level of nesting (it can be fixed by calling `flatten` several times in a row, see [here](https://issues.apache.org/jira/browse/ARROW-4090))\r\n- Or if there are 2d features (ex: wikisql, `table.rows` is a sequence of sequences of strings), because tf.train.Features only support 1-d lists. That's why tensorflow-datasets flattens these 2-d features to 1-d and adds ragged features that are the shapes of the arrays, so that they can be reconstructed.\r\n\r\nI think we can ignore the 2d stuff right now (some work is being done in #363 ), but I'd like to see the `flatten` issue fixed soon\r\n",
"That seems like a bug in `pyarrow`, or at least in `flatten()`. Looks like it should be a separate PR.",
"I made `.flatten` work on our side (it calls pyarrow's flatten several times until it's really flat).\r\n\r\nThe only datasets that won't work are those with lists of lists of features, which is a rare case. Hopefully we can make this work with the multi-dimensional arrays changes we're also doing.\r\n\r\nI think we can merge now :) cc @thomwolf "
] | 1,593,717,987,000 | 1,595,409,372,000 | 1,595,409,372,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/339",
"html_url": "https://github.com/huggingface/datasets/pull/339",
"diff_url": "https://github.com/huggingface/datasets/pull/339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/339.patch"
} | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent? | https://api.github.com/repos/huggingface/datasets/issues/339/timeline | null | true |
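For context, a bare-bones sketch of what serializing tokenized examples to a single TFRecord file looks like with plain TensorFlow; this is not the `export()` implementation of the PR, and the column names and toy examples below are assumptions for illustration.
```python
import tensorflow as tf

# Toy stand-in for a tokenized dataset: dicts of integer lists
examples = [
    {"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1]},
    {"input_ids": [101, 2062, 2613, 102], "attention_mask": [1, 1, 1, 1]},
]

def serialize_example(example):
    # Each integer-list column becomes an Int64List feature
    feature = {
        name: tf.train.Feature(int64_list=tf.train.Int64List(value=example[name]))
        for name in ("input_ids", "attention_mask")
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

with tf.io.TFRecordWriter("myrecord_1.tfrecord") as writer:
    for example in examples:
        writer.write(serialize_example(example))
```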
https://api.github.com/repos/huggingface/datasets/issues/338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/338/comments | https://api.github.com/repos/huggingface/datasets/issues/338/events | https://github.com/huggingface/datasets/pull/338 | 650,057,253 | MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx | 338 | Run `make style` | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,706,787,000 | 1,593,712,990,000 | 1,593,712,990,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/338",
"html_url": "https://github.com/huggingface/datasets/pull/338",
"diff_url": "https://github.com/huggingface/datasets/pull/338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/338.patch"
} | These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier. | https://api.github.com/repos/huggingface/datasets/issues/338/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/337/comments | https://api.github.com/repos/huggingface/datasets/issues/337/events | https://github.com/huggingface/datasets/issues/337 | 650,035,887 | MDU6SXNzdWU2NTAwMzU4ODc= | 337 | [Feature request] Export Arrow dataset to TFRecords | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,704,832,000 | 1,595,409,372,000 | 1,595,409,372,000 | CONTRIBUTOR | null | null | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex))
ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"])
# then add this method
ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord")
```
which would create files like so:
```bash
/my/tfrecords/myrecord_1.tfrecord
/my/tfrecords/myrecord_2.tfrecord
...
```
I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts? | https://api.github.com/repos/huggingface/datasets/issues/337/timeline | null | false |
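To complement the proposed API above, a hedged sketch of how the exported shards could be read back with `tf.data`; the feature spec is an assumption, since the real schema depends on the columns selected with `set_format()`.
```python
import tensorflow as tf

# Assumed feature spec matching the columns in the proposal above
feature_spec = {
    "input_ids": tf.io.FixedLenSequenceFeature([], tf.int64, allow_missing=True),
    "attention_mask": tf.io.FixedLenSequenceFeature([], tf.int64, allow_missing=True),
}

raw_records = tf.data.TFRecordDataset(["/my/tfrecords/myrecord_1.tfrecord"])
parsed = raw_records.map(lambda record: tf.io.parse_single_example(record, feature_spec))
```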
https://api.github.com/repos/huggingface/datasets/issues/336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/336/comments | https://api.github.com/repos/huggingface/datasets/issues/336/events | https://github.com/huggingface/datasets/issues/336 | 649,914,203 | MDU6SXNzdWU2NDk5MTQyMDM= | 336 | [Dataset requests] New datasets for Open Question Answering | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,593,694,983,000 | 1,594,890,262,000 | 1,594,890,262,000 | MEMBER | null | null | We are still a few datasets missing for Open-Question Answering which is currently a field in strong development.
Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al. 2015) [not open-source]
- MS-MARCO (Nguyen et al. 2016) [done]
- SearchQA (Dunn et al. 2017) [done]
- FEVER (Thorne et al. 2018) [done]
All these datasets are cited in http://arxiv.org/abs/2005.11401 | https://api.github.com/repos/huggingface/datasets/issues/336/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/335/comments | https://api.github.com/repos/huggingface/datasets/issues/335/events | https://github.com/huggingface/datasets/pull/335 | 649,765,179 | MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1 | 335 | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | {
"login": "PetrosStav",
"id": 15162021,
"node_id": "MDQ6VXNlcjE1MTYyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/15162021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PetrosStav",
"html_url": "https://github.com/PetrosStav",
"followers_url": "https://api.github.com/users/PetrosStav/followers",
"following_url": "https://api.github.com/users/PetrosStav/following{/other_user}",
"gists_url": "https://api.github.com/users/PetrosStav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PetrosStav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PetrosStav/subscriptions",
"organizations_url": "https://api.github.com/users/PetrosStav/orgs",
"repos_url": "https://api.github.com/users/PetrosStav/repos",
"events_url": "https://api.github.com/users/PetrosStav/events{/privacy}",
"received_events_url": "https://api.github.com/users/PetrosStav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I fixed the issues that you pointed out, re-run all the test and pushed the fixed code :-)",
"```\r\n=================================== FAILURES ===================================\r\n___________________ AWSDatasetTest.test_load_dataset_pandas ____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_pandas>\r\ndataset_name = 'pandas'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.pandas.91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926.pandas.Pandas object at 0x7f3b84f655c0>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b84f3d320>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py:23: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893169180856 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpwmbk8e8d\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893169180856 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO 
nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610536912 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610536912 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893610533608 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmp00hpyxrs\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893610533608 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610371224 
acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610371224 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset pandas (/tmp/tmp296h8eeg/pandas/default/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7f3b6a111550>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b85582908>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893159303656 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpk63omy4v\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 
139893159303656 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893159171352 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json\r\nINFO filelock:filelock.py:318 Lock 139893159171352 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893618479176 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpkeykru_f\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 139893618479176 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking 
/home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893618423848 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json\r\nINFO filelock:filelock.py:318 Lock 139893618423848 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset text (/tmp/tmpbu67mvue/text/default/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n=============================== warnings summary ===============================\r\n/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15\r\n /home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\ntests/test_dataset_common.py::LocalDatasetTest::test_builder_class_tydiqa\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/tydiqa/42d88245bde7c0db6c0d48c822dcaa26c7299e0b40cace7e8d6a9e3628135125/tydiqa.py:85: DeprecationWarning: invalid escape sequence \\G\r\n \"\"\"\r\n\r\ntests/test_dataset_common.py::AWSDatasetTest::test_builder_class_mwsc\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/mwsc/53c0daac11b6794ff62b52a3a46c4f9da1bef68fd664a2f97b8918917aead715/mwsc.py:70: DeprecationWarning: invalid escape sequence \\[\r\n pattern = \"\\[.*\\]\"\r\n\r\ntests/test_dataset_common.py::AWSDatasetTest::test_builder_class_squadshifts\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/squadshifts/15536d7296a785325b99f6d84dfdceafa427419dd6caad110eabb5e5b4156cc2/squadshifts.py:47: DeprecationWarning: invalid escape sequence \\ \r\n \"\"\"\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_pandas\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n===== 2 failed, 934 passed, 516 skipped, 4 warnings in 1562.46s (0:26:02) ======\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nI get this failed test on CircleCI , but all the tests that I run locally where successful. The error also seems not to have any, obvious at least, connection with my code.\r\n\r\nAny suggestions? Thanks! :-) "
] | 1,593,680,621,000 | 1,594,800,127,000 | 1,594,800,127,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/335",
"html_url": "https://github.com/huggingface/datasets/pull/335",
"diff_url": "https://github.com/huggingface/datasets/pull/335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/335.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/335/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/334/comments | https://api.github.com/repos/huggingface/datasets/issues/334/events | https://github.com/huggingface/datasets/pull/334 | 649,661,791 | MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0 | 334 | Add dataset.shard() method | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Great, done!"
] | 1,593,669,919,000 | 1,594,038,936,000 | 1,594,038,936,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/334",
"html_url": "https://github.com/huggingface/datasets/pull/334",
"diff_url": "https://github.com/huggingface/datasets/pull/334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/334.patch"
} | Fixes https://github.com/huggingface/nlp/issues/312 | https://api.github.com/repos/huggingface/datasets/issues/334/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/333/comments | https://api.github.com/repos/huggingface/datasets/issues/333/events | https://github.com/huggingface/datasets/pull/333 | 649,236,516 | MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0 | 333 | fix variable name typo | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```",
"Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it."
] | 1,593,630,830,000 | 1,595,605,411,000 | 1,595,579,536,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/333",
"html_url": "https://github.com/huggingface/datasets/pull/333",
"diff_url": "https://github.com/huggingface/datasets/pull/333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/333.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/333/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/332/comments | https://api.github.com/repos/huggingface/datasets/issues/332/events | https://github.com/huggingface/datasets/pull/332 | 649,140,135 | MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz | 332 | Add wiki_dpr | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.",
"It's ok to merge now imo. I'll make another PR if we find a way to have the missing embeddings"
] | 1,593,623,520,000 | 1,594,038,077,000 | 1,594,038,076,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/332",
"html_url": "https://github.com/huggingface/datasets/pull/332",
"diff_url": "https://github.com/huggingface/datasets/pull/332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/332.patch"
} | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of urls as input of the download_manager | https://api.github.com/repos/huggingface/datasets/issues/332/timeline | null | true |
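A hedged usage sketch for the new dataset: the config name below is an assumption (the PR describes one configuration with the precomputed embeddings and one without), so check the dataset script for the exact identifiers, and note that the embeddings configuration is a ~73GB download.

```python
import nlp

# Assumed config name -- verify against the wiki_dpr script before running.
config_name = "psgs_w100_with_nq_embeddings"

wiki = nlp.load_dataset("wiki_dpr", config_name, split="train")

passage = wiki[0]
print(passage["text"][:100])       # passage text
print(len(passage["embeddings"]))  # 768-dim DPR context embedding
```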
https://api.github.com/repos/huggingface/datasets/issues/331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/331/comments | https://api.github.com/repos/huggingface/datasets/issues/331/events | https://github.com/huggingface/datasets/issues/331 | 648,533,199 | MDU6SXNzdWU2NDg1MzMxOTk= | 331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```",
"> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k/8.90k [00:18<00:00, 486B/s]\r\n\r\nDownloading: 100%\r\n9.37k/9.37k [00:00<00:00, 234kB/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nDownloading:\r\n159M/? [00:09<00:00, 16.7MB/s]\r\n\r\nDownloading:\r\n376M/? [00:06<00:00, 62.6MB/s]\r\n\r\nDownloading:\r\n2.11M/? [00:06<00:00, 333kB/s]\r\n\r\nDownloading:\r\n46.4M/? [00:02<00:00, 18.4MB/s]\r\n\r\nDownloading:\r\n2.43M/? [00:00<00:00, 2.62MB/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0. 
Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```",
"In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)",
"Yes thanks for the support! I cleared out my cache folder and everything works fine now"
] | 1,593,555,693,000 | 1,594,299,820,000 | 1,594,299,820,000 | CONTRIBUTOR | null | null | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
builder_instance.download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
``` | https://api.github.com/repos/huggingface/datasets/issues/331/timeline | null | false |
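Given the resolution in the comments above (corrupted cached files), here is a minimal sketch of the workaround: wipe the cached copies and regenerate. The paths are the library defaults shown in the logs and may differ on your machine.

```python
import shutil
from pathlib import Path

import nlp

cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"

# Drop the partially generated arrow files for this dataset...
shutil.rmtree(cache_dir / "cnn_dailymail", ignore_errors=True)
# ...and the raw downloads too if they are suspect.
shutil.rmtree(cache_dir / "downloads", ignore_errors=True)

# Regenerating from scratch should now pass the split size verification.
dataset = nlp.load_dataset("cnn_dailymail", "3.0.0")
print(dataset)
```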
https://api.github.com/repos/huggingface/datasets/issues/330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/330/comments | https://api.github.com/repos/huggingface/datasets/issues/330/events | https://github.com/huggingface/datasets/pull/330 | 648,525,720 | MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw | 330 | Doc red | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,554,731,000 | 1,594,037,439,000 | 1,593,952,049,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/330",
"html_url": "https://github.com/huggingface/datasets/pull/330",
"diff_url": "https://github.com/huggingface/datasets/pull/330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/330.patch"
} | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this.
- As well as the relation id, the full relation name is mapped from `rel_info.json`
- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.
- Used the fix from #319 to allow nested sequences of dicts. | https://api.github.com/repos/huggingface/datasets/issues/330/timeline | null | true |
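A short loading sketch for the splits described in this PR; the split names come from the description above, while the exact feature nesting is not spelled out here, so the inspection step only prints the keys.

```python
import nlp

# Two training variants are exposed instead of a single nlp.Split.TRAIN.
train_annotated = nlp.load_dataset("docred", split="train_annotated")
train_distant = nlp.load_dataset("docred", split="train_distant")

example = train_annotated[0]
print(example.keys())
# Per the PR, relation triples use the renamed keys 'head', 'relation' and 'tail'
# (instead of the original 'h', 'r' and 't').
```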
https://api.github.com/repos/huggingface/datasets/issues/329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/329/comments | https://api.github.com/repos/huggingface/datasets/issues/329/events | https://github.com/huggingface/datasets/issues/329 | 648,446,979 | MDU6SXNzdWU2NDg0NDY5Nzk= | 329 | [Bug] FileLock dependency incompatible with filesystem | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, can you give details on your environment/os/packages versions/etc?",
"Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. But Python is able to create new files and write to them outside of the FileLock package.\r\n\r\nWhen I attempt to use FileLock within a Docker container by writing to `/root/.cache/hello.txt`, it succeeds. So there's some permissions issue. But it's not a Docker configuration issue; I've replicated it without Docker.\r\n```bash\r\necho \"hello world\" >> hello.txt\r\nls -l\r\n\r\n-rw-rw-r-- 1 ubuntu ubuntu 10 Jun 30 19:52 hello.txt\r\n```",
"Looks like the `flock` syscall does not work on Lustre filesystems by default: https://github.com/benediktschmitt/py-filelock/issues/67.\r\n\r\nI added the `-o flock` option when mounting the filesystem, as [described here](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step2.html), which fixed the issue.",
"Awesome, thanks a lot for sharing your fix!"
] | 1,593,546,331,000 | 1,593,586,558,000 | 1,593,552,786,000 | CONTRIBUTOR | null | null | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like this:
```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```
It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:
```python
open("/fsx/hello.txt").write("hello") # succeeds
from filelock import FileLock
with FileLock("/fsx/hello.lock"):
open("/fsx/hello.txt").write("hello") # hangs indefinitely
```
Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that. | https://api.github.com/repos/huggingface/datasets/issues/329/timeline | null | false |
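Before pointing `cache_dir` at a network filesystem, a quick probe like the one below (sketched from this report) can tell whether `filelock` can actually acquire a lock there instead of hanging; `/fsx` is only an example mount point.

```python
from filelock import FileLock, Timeout

probe_path = "/fsx/.lock_probe.lock"  # example path on the mounted filesystem

try:
    # A short timeout avoids hanging forever when flock is unsupported,
    # e.g. a Lustre mount without the `-o flock` option mentioned above.
    with FileLock(probe_path, timeout=10):
        print("File locking works here; this directory is safe to use as cache_dir.")
except Timeout:
    print("Lock could not be acquired; remount with flock support or choose another cache_dir.")
```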
https://api.github.com/repos/huggingface/datasets/issues/328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/328/comments | https://api.github.com/repos/huggingface/datasets/issues/328/events | https://github.com/huggingface/datasets/issues/328 | 648,326,841 | MDU6SXNzdWU2NDgzMjY4NDE= | 328 | Fork dataset | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for example). Custom dataset scripts can be called locally with `nlp.load_dataset(path_to_my_script_directory)`.\r\n\r\nThis should help you get what you call \"Dataset1\".\r\n\r\nThen using some dataset transforms like `.map` for example you can get to \"DatasetNER\" and \"DatasetREL\".\r\n",
"Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - \r\n\r\n```\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 60 \r\n 61 def __init__(self, source):\r\n---> 62 self._open(source)\r\n 63 \r\n 64 \r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\nBut I'm going to give the generator_dataset_builder a try.\r\n\r\n1 more quick question -- can .map be used to output different length mappings -- could I skip one, or yield 2, can you map_batch ",
"You can use `.map(my_func, batched=True)` and return less examples, or more examples if you want",
"Thanks this answers my question. I think the issue I was having using the json loader were due to using gzipped jsonl files.\r\n\r\nThe error I get now is :\r\n\r\n```\r\n\r\nUsing custom data configuration test\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-38-29082a31e5b2> in <module>\r\n 5 print(ner_datafiles)\r\n 6 \r\n----> 7 ds = nlp.load_dataset(\"json\", \"test\", data_files=ner_datafiles[0])\r\n 8 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: int64> nor list<item: int64>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions.\r\n```\r\n\r\nIf I just create a pa- table manually like is done in the jsonloader -- it seems to work fine. 
The JSON I'm trying to load isn't overly complex - 1 integer field, the rest text fields with a nested list of objects with text fields.\",
"I'll close this -- It's still unclear how to go about troubleshooting the json example as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for a better support for using nlp for making custom data-loaders."
] | 1,593,535,373,000 | 1,594,071,839,000 | 1,594,071,839,000 | NONE | null | null | We have a multi-task learning model training pipeline that I'm trying to convert to use the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and JSON with Entity and Relation annotations and creates 2 datasets for training the NER and Relation prediction heads.
Is there some good way to "fork" a dataset?
E.g.
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 -> DatasetREL
or
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 + DatasetNER -> DatasetREL
| https://api.github.com/repos/huggingface/datasets/issues/328/timeline | null | false |
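A sketch of the "fork" pattern suggested in the comments above: build the base dataset once from JSON, then derive the task-specific views with `.map`. The field names (`text`, `entities`, `relations`) and the file name are placeholders for whatever the real annotation schema contains.

```python
import nlp

# Step 1: "Dataset1" from the raw JSON lines (placeholder file name).
dataset1 = nlp.load_dataset("json", data_files={"train": "annotations.jsonl"})["train"]

# Step 2: "DatasetNER" -- one output row per input row.
def to_ner(example):
    return {"text": example["text"], "ner_tags": example["entities"]}

dataset_ner = dataset1.map(to_ner)

# Step 3: "DatasetREL" -- batched=True lets one input row emit zero or many rows.
def to_rel(batch):
    texts, relations = [], []
    for text, rels in zip(batch["text"], batch["relations"]):
        for rel in rels:
            texts.append(text)
            relations.append(rel)
    return {"text": texts, "relation": relations}

dataset_rel = dataset1.map(to_rel, batched=True, remove_columns=dataset1.column_names)
```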
https://api.github.com/repos/huggingface/datasets/issues/327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/327/comments | https://api.github.com/repos/huggingface/datasets/issues/327/events | https://github.com/huggingface/datasets/pull/327 | 648,312,858 | MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw | 327 | set seed for suffling tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,534,094,000 | 1,593,678,845,000 | 1,593,678,844,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/327",
"html_url": "https://github.com/huggingface/datasets/pull/327",
"diff_url": "https://github.com/huggingface/datasets/pull/327.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/327.patch"
} | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)` | https://api.github.com/repos/huggingface/datasets/issues/327/timeline | null | true |
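For anyone hitting the same flakiness in their own code, a small sketch of the fix's idea: passing a fixed `seed` (assuming `train_test_split` accepts one, as the test fixed here does) makes the shuffled split deterministic across runs.

```python
import nlp

dataset = nlp.load_dataset("imdb", split="train")

# Same seed -> same shuffle -> identical splits across runs.
split_a = dataset.train_test_split(test_size=0.1, shuffle=True, seed=42)
split_b = dataset.train_test_split(test_size=0.1, shuffle=True, seed=42)

assert split_a["test"][0] == split_b["test"][0]
print(len(split_a["train"]), len(split_a["test"]))
```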
https://api.github.com/repos/huggingface/datasets/issues/326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/326/comments | https://api.github.com/repos/huggingface/datasets/issues/326/events | https://github.com/huggingface/datasets/issues/326 | 648,126,103 | MDU6SXNzdWU2NDgxMjYxMDM= | 326 | Large dataset in Squad2-format | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to let the users do their training/evaluations with the exact same version of the dataset.\r\nWe allow for each dataset to specify a version (ex: 1.0.0) and increment this number every time there are new samples in the dataset for example. Does it look like a good solution for you ? Or would you rather have one final version with the full dataset ?",
"It would also be good if there is any possibility for versioning, I think this way is much better than the dynamic way.\nIf you mean that part to put the tiles into one is the generation it would take up to 15-20 minutes on home computer hardware.\nAre there any compression or optimization algorithms while generating the dataset ?\nOtherwise the hardware limit is around 32 GB ram at the moment.\nIf everything works well we will add some more gigabytes of data in future what would make it pretty memory costly.",
"15-20 minutes is fine !\r\nAlso there's no RAM limitations as we save to disk every 1000 elements while generating the dataset by default.\r\nAfter generation, the dataset is ready to use with (again) no RAM limitations as we do memory-mapping.",
"Wow, that sounds pretty cool.\nActually I have the problem of running out of memory while tokenization on our local machine.\nThat wouldn't happen again, would it ?",
"You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)` that writes the tokenized texts on disk as well. And then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :)",
"Does it have an affect to the trainings speed ?",
"In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.",
"Closing this one. Feel free to re-open if you have other questions"
] | 1,593,519,539,000 | 1,594,285,310,000 | 1,594,285,310,000 | NONE | null | null | At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community.
Because of the computing power required, we split it into multiple tiles, but they are all in the same format.
Right now the most important facts about it are these:
- Contexts: 1.047.671
- questions: 1.677.732
- Answers: 6.742.406
- unanswerable: 377.398
It is already cleaned
<pre><code>
train_data = [
{
'context': "this is the context",
'qas': [
{
'id': "00002",
'is_impossible': False,
'question': "whats is this",
'answers': [
{
'text': "answer",
'answer_start': 0
}
]
},
{
'id': "00003",
'is_impossible': False,
'question': "question2",
'answers': [
{
'text': "answer2",
'answer_start': 1
}
]
}
]
}
]
</code></pre>
Because it is growing every day, we are thinking about a structure like this:
We host a JSON file containing all the download links, and the script can load it dynamically.
At the moment it is around 20 GB.
Any advice on how to handle this, or a ready-to-use template? | https://api.github.com/repos/huggingface/datasets/issues/326/timeline | null | false |
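Since tokenization memory came up in the thread above, here is a minimal sketch of the `.map`-based flow the maintainers describe: each processed batch is written to disk, so the tokenized dataset stays memory-mapped. The whitespace tokenizer and the SQuAD2 stand-in corpus are placeholders.

```python
import nlp

# Stand-in corpus in the same QA format; swap in the real dataset/script here.
dataset = nlp.load_dataset("squad_v2", split="train[:1%]")

def tokenize(batch):
    # Placeholder tokenization -- use a transformers tokenizer in practice.
    return {"context_tokens": [text.split() for text in batch["context"]]}

# Outputs are flushed to arrow files while mapping, keeping RAM usage flat
# even for a ~20 GB corpus.
tokenized = dataset.map(tokenize, batched=True)
print(tokenized[0]["context_tokens"][:10])
```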
https://api.github.com/repos/huggingface/datasets/issues/325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/325/comments | https://api.github.com/repos/huggingface/datasets/issues/325/events | https://github.com/huggingface/datasets/pull/325 | 647,601,592 | MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw | 325 | Add SQuADShifts dataset | {
"login": "millerjohnp",
"id": 8953195,
"node_id": "MDQ6VXNlcjg5NTMxOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/millerjohnp",
"html_url": "https://github.com/millerjohnp",
"followers_url": "https://api.github.com/users/millerjohnp/followers",
"following_url": "https://api.github.com/users/millerjohnp/following{/other_user}",
"gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions",
"organizations_url": "https://api.github.com/users/millerjohnp/orgs",
"repos_url": "https://api.github.com/users/millerjohnp/repos",
"events_url": "https://api.github.com/users/millerjohnp/events{/privacy}",
"received_events_url": "https://api.github.com/users/millerjohnp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Very cool to have this dataset, thank you for adding it :)"
] | 1,593,457,876,000 | 1,593,536,851,000 | 1,593,536,851,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/325",
"html_url": "https://github.com/huggingface/datasets/pull/325",
"diff_url": "https://github.com/huggingface/datasets/pull/325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/325.patch"
} | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. | https://api.github.com/repos/huggingface/datasets/issues/325/timeline | null | true |
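A usage sketch for the new dataset; the four config names below match the domains in the paper (new Wikipedia, NYT, Reddit, Amazon) but are an assumption here, as is the test-only split, so verify both against the dataset script.

```python
import nlp

# Assumed config names, one per evaluation domain from the paper.
for config in ("new_wiki", "nyt", "reddit", "amazon"):
    squadshifts = nlp.load_dataset("squadshifts", config, split="test")  # split name assumed
    print(config, len(squadshifts))
```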
https://api.github.com/repos/huggingface/datasets/issues/324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/324/comments | https://api.github.com/repos/huggingface/datasets/issues/324/events | https://github.com/huggingface/datasets/issues/324 | 647,525,725 | MDU6SXNzdWU2NDc1MjU3MjU= | 324 | Error when calculating glue score | {
"login": "D-i-l-r-u-k-s-h-i",
"id": 47185867,
"node_id": "MDQ6VXNlcjQ3MTg1ODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i",
"html_url": "https://github.com/D-i-l-r-u-k-s-h-i",
"followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers",
"following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}",
"gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions",
"organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs",
"repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos",
"events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}",
"received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.",
"I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. 
Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```",
"MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).",
"Closing this one. Feel free to re-open if you have other questions :)"
] | 1,593,449,628,000 | 1,594,286,014,000 | 1,594,286,014,000 | NONE | null | null | I was trying the glue score along with other metrics here, but glue gives me this error:
```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
191 """
192 if predictions is not None:
--> 193 self.add_batch(predictions=predictions, references=references)
194 self.finalize(timeout=timeout)
195
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
207 if self.writer is None:
208 self._init_writer()
--> 209 self.writer.write_batch(batch)
210
211 def add(self, prediction=None, reference=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
155 if self.pa_writer is None:
156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
158 if writer_batch_size is None:
159 writer_batch_size = self.writer_batch_size
/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
TypeError: an integer is required (got type str)
```
I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you. | https://api.github.com/repos/huggingface/datasets/issues/324/timeline | null | false |
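Restating the resolution from the comments above as a sketch: the glue metrics are classification metrics, so both `predictions` and `references` must be integer label ids, not raw strings or token ids.

```python
import nlp

glue_metric = nlp.load_metric("glue", name="mrpc")

# Toy label ids: 1 = semantically equivalent, 0 = not equivalent.
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

score = glue_metric.compute(predictions=predictions, references=references)
print(score)  # accuracy and F1 for MRPC
```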
https://api.github.com/repos/huggingface/datasets/issues/323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/323/comments | https://api.github.com/repos/huggingface/datasets/issues/323/events | https://github.com/huggingface/datasets/pull/323 | 647,521,308 | MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3 | 323 | Add package path to sys when downloading package as github archive | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ",
" I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the code ^^'\r\nWe could check if external imports have a `__init__.py` and if it is the case then we can add to directory to the `PYTHONPATH`"
] | 1,593,449,161,000 | 1,596,117,623,000 | 1,596,117,623,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch"
} | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305 | https://api.github.com/repos/huggingface/datasets/issues/323/timeline | null | true |
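A rough, standard-library-only sketch of the mechanism this PR relies on: download a GitHub archive of an external package, extract it, and put the extracted folder on `sys.path` so the package's own relative imports resolve. The repository URL and module name are placeholders, not the actual coval details.

```python
import importlib
import sys
import tempfile
import urllib.request
import zipfile
from pathlib import Path

# Placeholder archive URL -- the real metric points at the coval repository.
archive_url = "https://github.com/some-org/some-package/archive/master.zip"

workdir = Path(tempfile.mkdtemp())
archive_path = workdir / "package.zip"
urllib.request.urlretrieve(archive_url, archive_path)

with zipfile.ZipFile(archive_path) as zf:
    zf.extractall(workdir)

# GitHub archives extract to "<repo>-<branch>/"; adding that folder to sys.path
# makes "import some_package" (and its intra-package imports) work.
extracted_root = next(p for p in workdir.iterdir() if p.is_dir())
sys.path.append(str(extracted_root))

some_package = importlib.import_module("some_package")  # placeholder module name
```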
https://api.github.com/repos/huggingface/datasets/issues/322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/322/comments | https://api.github.com/repos/huggingface/datasets/issues/322/events | https://github.com/huggingface/datasets/pull/322 | 647,483,850 | MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2 | 322 | output nested dict in get_nearest_examples | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,593,445,667,000 | 1,593,678,813,000 | 1,593,678,812,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/322",
"html_url": "https://github.com/huggingface/datasets/pull/322",
"diff_url": "https://github.com/huggingface/datasets/pull/322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/322.patch"
} | As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example:
```python
my_examples = dataset[0:10]
print(type(my_examples))
# >>> dict
print(my_examples["my_column"][0]
# >>> this is the first element of the column 'my_column'
```
Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples:
```python
dataset.add_faiss_index(column="embeddings")
scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding)
print(type(examples))
# >>> dict
```
Previously it was returning a list[dict]. It was the only place that was using this output format.
To make it work I had to implement `__getitem__(key)` where `key` is a list.
This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries). | https://api.github.com/repos/huggingface/datasets/issues/322/timeline | null | true |
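A short sketch of consuming the new dict-of-columns output; the tiny in-memory dataset uses `Dataset.from_dict` for illustration (available in current versions of the library), and faiss must be installed for the index.

```python
import numpy as np
import nlp

# Toy dataset with a text column and a 4-dim embedding column.
dataset = nlp.Dataset.from_dict({
    "text": ["first passage", "second passage", "third passage"],
    "embeddings": [[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.0, 0.3], [0.9, 0.8, 0.7, 0.6]],
})

dataset.add_faiss_index(column="embeddings")

query = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)
scores, examples = dataset.get_nearest_examples("embeddings", query, k=2)

# `examples` mirrors dataset[0:k]: a dict of columns, not a list of dicts.
for score, text in zip(scores, examples["text"]):
    print(score, text)
```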
https://api.github.com/repos/huggingface/datasets/issues/321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/321/comments | https://api.github.com/repos/huggingface/datasets/issues/321/events | https://github.com/huggingface/datasets/issues/321 | 647,271,526 | MDU6SXNzdWU2NDcyNzE1MjY= | 321 | ERROR:root:mwparserfromhell | {
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashes.\r\n\r\nIt will help us know if we have to fix it on our side or if it is a `mwparserfromhell` issue.",
"Hi, \r\n\r\nThank you for you answer.\r\nI have try to print the bad section using `try` and `except`, but it is a bit weird as the error seems to appear 3 times for instance, but the two first error does not print anything (as if the function did not go in the `except` part).\r\nFor the third one, I got that (I haven't display the entire text) :\r\n\r\n> error : ==== Parque nacional Cajas ====\r\n> {{AP|Parque nacional Cajas}}\r\n> [[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\n> El parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n> [[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\n> leturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n> Para acceder al parque desde la costa, la vía Molleturo-Cuenca es también la mejor opción.\r\n\r\nHow can I display the link instead of the text ? I suppose it will help you more ",
"The error appears several times as Apache Beam retries to process examples up to 4 times irc.\r\n\r\nI just tried to run this text into `mwparserfromhell` but it worked without the issue.\r\n\r\nI used this code (from the `wikipedia.py` script):\r\n```python\r\nimport mwparserfromhell as parser\r\nimport re\r\nimport six\r\n\r\nraw_content = r\"\"\"==== Parque nacional Cajas ====\r\n{{AP|Parque nacional Cajas}}\r\n[[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\nEl parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n[[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\nleturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n\"\"\"\r\n\r\nwikicode = parser.parse(raw_content)\r\n\r\n# Filters for references, tables, and file/image links.\r\nre_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\r\n\r\ndef rm_wikilink(obj):\r\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\r\n\r\ndef rm_tag(obj):\r\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\r\n\r\ndef rm_template(obj):\r\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\r\n\r\ndef try_remove_obj(obj, section):\r\n try:\r\n section.remove(obj)\r\n except ValueError:\r\n # For unknown reasons, objects are sometimes not found.\r\n pass\r\n\r\nsection_text = []\r\nfor section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\r\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\r\n try_remove_obj(obj, section)\r\n\r\n section_text.append(section.strip_code().strip())\r\n```",
"Not sure why we're having this issue. Maybe could you get also the file that's causing that ?",
"thanks for your answer.\r\nHow can I know which file is causing the issue ? \r\nI am trying to load the spanish wikipedia data. ",
"Because of the way Apache Beam works we indeed don't have access to the file name at this point in the code.\r\nWe'll have to use some tricks I think :p \r\n\r\nYou can append `filepath` to `title` in `wikipedia.py:L512` for example. [[EDIT: it's L494 my bad]]\r\nThen just do `try:...except:` on the call of `_parse_and_clean_wikicode` L500 I guess.\r\n\r\nThanks for diving into this ! I tried it myself but I run out of memory on my laptop\r\nAs soon as we have the name of the file it should be easier to find what's wrong.",
"Thanks for your help.\r\n\r\nI tried to print the \"title\" of the document inside the` except (mwparserfromhell.parser.ParserError) as e`,the title displayed was : \"Campeonato Mundial de futsal de la AMF 2015\". (Wikipedia ES) Is it what you were looking for ?",
"Thanks a lot @Shiro-LK !\r\n\r\nI was able to reproduce the issue. It comes from [this table on wikipedia](https://es.wikipedia.org/wiki/Campeonato_Mundial_de_futsal_de_la_AMF_2015#Clasificados) that can't be parsed.\r\n\r\nThe file in which the problem occurs comes from the wikipedia dumps, and it can be downloaded [here](https://dumps.wikimedia.org/eswiki/20200501/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2)\r\n\r\nParsing the file this way raises the parsing issue:\r\n\r\n```python\r\nimport mwparserfromhell as parser\r\nfrom tqdm.auto import tqdm\r\nimport bz2\r\nimport six\r\nimport logging\r\nimport codecs\r\nimport xml.etree.cElementTree as etree\r\n\r\nfilepath = \"path/to/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2\"\r\n\r\ndef _extract_content(filepath):\r\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, \"rb\") as f:\r\n f = bz2.BZ2File(filename=f)\r\n if six.PY3:\r\n # Workaround due to:\r\n # https://github.com/tensorflow/tensorflow/issues/33563\r\n utf_f = codecs.getreader(\"utf-8\")(f)\r\n else:\r\n utf_f = f\r\n # To clear root, to free-up more memory than just `elem.clear()`.\r\n context = etree.iterparse(utf_f, events=(\"end\",))\r\n context = iter(context)\r\n unused_event, root = next(context)\r\n for unused_event, elem in tqdm(context, total=949087):\r\n if not elem.tag.endswith(\"page\"):\r\n continue\r\n namespace = elem.tag[:-4]\r\n title = elem.find(\"./{0}title\".format(namespace)).text\r\n ns = elem.find(\"./{0}ns\".format(namespace)).text\r\n id_ = elem.find(\"./{0}id\".format(namespace)).text\r\n # Filter pages that are not in the \"main\" namespace.\r\n if ns != \"0\":\r\n root.clear()\r\n continue\r\n raw_content = elem.find(\"./{0}revision/{0}text\".format(namespace)).text\r\n root.clear()\r\n\r\n if \"Campeonato Mundial de futsal de la AMF 2015\" in title:\r\n yield (id_, title, raw_content)\r\n\r\nfor id_, title, raw_content in _extract_content(filepath):\r\n wikicode = parser.parse(raw_content)\r\n```\r\n\r\nThe copied the raw content that can't be parsed [here](https://pastebin.com/raw/ZbmevLyH).\r\n\r\nThe minimal code to reproduce is:\r\n```python\r\nimport mwparserfromhell as parser\r\nimport requests\r\n\r\nraw_content = requests.get(\"https://pastebin.com/raw/ZbmevLyH\").content.decode(\"utf-8\")\r\nwikicode = parser.parse(raw_content)\r\n\r\n```\r\n\r\nI will create an issue on mwparserfromhell's repo to see if we can fix that\r\n",
"This going to be fixed in the next `mwparserfromhell` release :)"
] | 1,593,429,043,000 | 1,595,521,714,000 | null | NONE | null | null | Hi,
I am trying to download some Wikipedia data, but I got this error for Spanish ("es"); other languages may have the same error, as I haven't tried all of them.
`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.`
The code I used was:
`dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
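For reference, the underlying parser failure can apparently be reproduced outside of the dataset script. A minimal sketch based on the discussion below (the pastebin URL holds the raw wikicode of the offending page, "Campeonato Mundial de futsal de la AMF 2015") might look like this:
```python
import mwparserfromhell
import requests

# Raw wikicode of the page that triggers the error, as shared in the discussion below.
raw_content = requests.get("https://pastebin.com/raw/ZbmevLyH").content.decode("utf-8")

try:
    mwparserfromhell.parse(raw_content)
except mwparserfromhell.parser.ParserError as e:
    # This is the "C tokenizer exited with non-empty token stack" failure logged above.
    print("mwparserfromhell could not parse this page:", e)
```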
| https://api.github.com/repos/huggingface/datasets/issues/321/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/320/comments | https://api.github.com/repos/huggingface/datasets/issues/320/events | https://github.com/huggingface/datasets/issues/320 | 647,188,167 | MDU6SXNzdWU2NDcxODgxNjc= | 320 | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] | 1,593,416,195,000 | 1,593,441,882,000 | 1,593,441,882,000 | CONTRIBUTOR | null | null | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
```
@srush @lhoestq | https://api.github.com/repos/huggingface/datasets/issues/320/timeline | null | false |