| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.1B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.58k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,642B |
| updated_at | int64 | 1,587B–1,642B |
| closed_at | int64 | 1,587B–1,642B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
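As a hedged sketch, a schema like the one above can be inspected with `datasets` once the dump is hosted on the Hub; the repository id below is a hypothetical placeholder:

```python
from datasets import load_dataset

# "some-org/github-issues" is a hypothetical repo id; any Hub dataset with
# this schema would work the same way.
ds = load_dataset("some-org/github-issues", split="train")
print(ds.features)   # column names and dtypes, as summarized in the table
print(ds.num_rows)
```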
https://api.github.com/repos/huggingface/datasets/issues/2669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2669/comments
https://api.github.com/repos/huggingface/datasets/issues/2669/events
https://github.com/huggingface/datasets/issues/2669
946,982,998
MDU6SXNzdWU5NDY5ODI5OTg=
2,669
Metric kwargs are not passed to underlying external metric f1_score
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weighted\", \"binary\"} or None, default=\"binary\"`.\r\n\r\nSecond, you should take into account that all additional metric-specific argument should be passed in the method `compute` (and not in the method `load_metric`). You can find more information in our documentation: https://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\nSo for example, if you would like to calculate the macro-averaged F1 score, you should use:\r\n```python\r\nimport datasets\r\n\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True)\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute(average=\"macro\")\r\n```", "Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself." ]
1,626,597,151,000
1,626,633,365,000
1,626,607,144,000
CONTRIBUTOR
null
null
null
## Describe the bug

When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.

## Steps to reproduce the bug

```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```

## Expected results

No error, because `average="min"` should be passed correctly to f1_score in sklearn.

## Actual results

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
    "f1": f1_score(
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
    return fbeta_score(y_true, y_pred, beta=1, labels=labels,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
    _, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
    labels = _check_set_wise_labels(y_true, y_pred, average, labels,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
    raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```

## Environment info

- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2669/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2668/comments
https://api.github.com/repos/huggingface/datasets/issues/2668/events
https://github.com/huggingface/datasets/pull/2668
946,867,622
MDExOlB1bGxSZXF1ZXN0NjkxOTY1MTY1
2,668
Add Russian SuperGLUE
{ "login": "slowwavesleep", "id": 44175589, "node_id": "MDQ6VXNlcjQ0MTc1NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slowwavesleep", "html_url": "https://github.com/slowwavesleep", "followers_url": "https://api.github.com/users/slowwavesleep/followers", "following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}", "gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}", "starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions", "organizations_url": "https://api.github.com/users/slowwavesleep/orgs", "repos_url": "https://api.github.com/users/slowwavesleep/repos", "events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}", "received_events_url": "https://api.github.com/users/slowwavesleep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Added the missing label classes and their explanations (to the best of my understanding)", "Thanks a lot ! Once the last comment about the label names is addressed we can merge :)" ]
1,626,543,688,000
1,627,559,431,000
1,627,559,431,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2668", "html_url": "https://github.com/huggingface/datasets/pull/2668", "diff_url": "https://github.com/huggingface/datasets/pull/2668.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2668.patch", "merged_at": 1627559430000 }
Hi,

This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2668/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2668/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2667/comments
https://api.github.com/repos/huggingface/datasets/issues/2667/events
https://github.com/huggingface/datasets/pull/2667
946,861,908
MDExOlB1bGxSZXF1ZXN0NjkxOTYwNzc3
2,667
Use tqdm from tqdm_utils
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The current CI failure is due to modifications in the dataset script.", "Merging since the CI is only failing because of dataset card issues, which is unrelated to this PR" ]
1,626,541,595,000
1,626,716,350,000
1,626,715,920,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2667", "html_url": "https://github.com/huggingface/datasets/pull/2667", "diff_url": "https://github.com/huggingface/datasets/pull/2667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2667.patch", "merged_at": 1626715920000 }
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`.

Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work either), so adding support for this is a future goal.

Additionally, this PR adds a unit ("ba" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https://github.com/huggingface/datasets/issues/2657).
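A minimal sketch of the usage this PR enables; the import path follows the PR description and may differ in later releases:

```python
# Per the PR text, the helper lives in datasets.utils.tqdm_utils; this path
# is taken from the description and is not guaranteed stable across versions.
from datasets.utils.tqdm_utils import disable_progress_bar

disable_progress_bar()  # all datasets progress bars are hidden from here on
```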
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2667/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2667/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2666/comments
https://api.github.com/repos/huggingface/datasets/issues/2666/events
https://github.com/huggingface/datasets/pull/2666
946,825,140
MDExOlB1bGxSZXF1ZXN0NjkxOTMzMDM1
2,666
Adds CodeClippy dataset [WIP]
{ "login": "arampacha", "id": 69807323, "node_id": "MDQ6VXNlcjY5ODA3MzIz", "avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arampacha", "html_url": "https://github.com/arampacha", "followers_url": "https://api.github.com/users/arampacha/followers", "following_url": "https://api.github.com/users/arampacha/following{/other_user}", "gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}", "starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arampacha/subscriptions", "organizations_url": "https://api.github.com/users/arampacha/orgs", "repos_url": "https://api.github.com/users/arampacha/repos", "events_url": "https://api.github.com/users/arampacha/events{/privacy}", "received_events_url": "https://api.github.com/users/arampacha/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,626,528,724,000
1,626,685,794,000
null
NONE
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2666", "html_url": "https://github.com/huggingface/datasets/pull/2666", "diff_url": "https://github.com/huggingface/datasets/pull/2666.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2666.patch", "merged_at": null }
CodeClippy is an open-source code dataset scraped from GitHub during the Flax/JAX community week: https://the-eye.eu/public/AI/training_data/code_clippy_data/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2666/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2666/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2665/comments
https://api.github.com/repos/huggingface/datasets/issues/2665/events
https://github.com/huggingface/datasets/pull/2665
946,822,036
MDExOlB1bGxSZXF1ZXN0NjkxOTMwNjky
2,665
Adds APPS dataset to the hub [WIP]
{ "login": "arampacha", "id": 69807323, "node_id": "MDQ6VXNlcjY5ODA3MzIz", "avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arampacha", "html_url": "https://github.com/arampacha", "followers_url": "https://api.github.com/users/arampacha/followers", "following_url": "https://api.github.com/users/arampacha/following{/other_user}", "gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}", "starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arampacha/subscriptions", "organizations_url": "https://api.github.com/users/arampacha/orgs", "repos_url": "https://api.github.com/users/arampacha/repos", "events_url": "https://api.github.com/users/arampacha/events{/privacy}", "received_events_url": "https://api.github.com/users/arampacha/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,626,527,597,000
1,626,544,607,000
null
NONE
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2665", "html_url": "https://github.com/huggingface/datasets/pull/2665", "diff_url": "https://github.com/huggingface/datasets/pull/2665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2665.patch", "merged_at": null }
A loading script for the [APPS dataset](https://github.com/hendrycks/apps).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2665/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2665/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2663/comments
https://api.github.com/repos/huggingface/datasets/issues/2663/events
https://github.com/huggingface/datasets/issues/2663
946,552,273
MDU6SXNzdWU5NDY1NTIyNzM=
2,663
[`to_json`] add multi-proc sharding support
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi @stas00, \r\nI want to work on this issue and I was thinking why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow table to `json` using multiprocessing. I've a small code snippet for some clarity:\r\n```\r\nresult = list(\r\n pool.imap(self._apply_df, [(offset, batch_size) for offset in range(0, len(self.dataset), batch_size)])\r\n )\r\n```\r\n`_apply_df` is a function which will return `batch.to_pandas().to_json(path_or_buf=None, orient=\"records\", lines=True)` which is basically json version of the batched pyarrow table. Later on we can concatenate it to form json file? \r\n\r\nI think the only downside here is to write file from `imap` output (output would be a list and we'll need to iterate over it and write in a file) which might add a little overhead cost. What do you think about this?", "Followed up in https://github.com/huggingface/datasets/pull/2747" ]
1,626,464,510,000
1,631,541,397,000
1,631,541,397,000
CONTRIBUTOR
null
null
null
As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR.

I implemented sharded saving, which is much, much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress. So if possible, ideally this multi-proc support could be implemented internally in `to_json` via a `num_proc` argument. I guess `num_proc` will be the number of shards?

I think the user will need to use this feature wisely, since too many processes writing to, say, a regular HDD is likely to be slower than one process.

I'm not sure whether the user should be responsible to concatenate the shards at the end or `datasets`; either way works for my needs.

The code I was using:

```
from multiprocessing import cpu_count, Process, Queue

[...]

filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count())

DATASET_NAME = "oscar"
SHARDS = 10

def process_shard(idx):
    print(f"Sharding {idx}")
    ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True)
    # ds_shard = ds_shard.shuffle()  # remove contiguous=True above if shuffling
    print(f"Saving {DATASET_NAME}-{idx}.jsonl")
    ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False)

queue = Queue()
processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)]
for p in processes:
    p.start()
for p in processes:
    p.join()
```

Thank you!

@lhoestq
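A hypothetical sketch of the requested interface (not implemented at the time of this issue; a follow-up PR is linked in the comments):

```python
# Hypothetical API: let to_json shard and write in parallel internally
# instead of hand-rolling Process objects as in the snippet above.
filtered_dataset.to_json(
    "oscar.jsonl",
    num_proc=10,          # proposed argument; number of shards/processes
    orient="records",
    lines=True,
    force_ascii=False,
)
```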
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2663/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2662/comments
https://api.github.com/repos/huggingface/datasets/issues/2662/events
https://github.com/huggingface/datasets/pull/2662
946,470,815
MDExOlB1bGxSZXF1ZXN0NjkxNjM5MjU5
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is ready for review now :)\r\n\r\nI would love to have some feedback on the changes in load.py @albertvillanova. There are many changes so if you have questions let me know, especially on the `resolve_data_files` functions and on the changes in `prepare_module`.\r\n\r\nAnd @thomwolf if you want to take a look at the documentation, feel free to share your suggestions :)", "I took your comments into account thanks !\r\nAnd I made `aiohttp` a required dependency :)", "Just updated the documentation :)\r\n[share_datasets.html](https://45532-250213286-gh.circle-artifacts.com/0/docs/_build/html/share_dataset.html)\r\n\r\nLet me know if you have some comments", "Merging this one :) \r\n\r\nWe can try to integrate the changes in the docs to #2718 @stevhliu !", "Baked this into the [docs](https://44335-250213286-gh.circle-artifacts.com/0/docs/_build/html/loading.html#hugging-face-hub) already, let me know if there is anything else I should add! :)" ]
1,626,456,118,000
1,629,903,181,000
1,629,901,088,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2662", "html_url": "https://github.com/huggingface/datasets/pull/2662", "diff_url": "https://github.com/huggingface/datasets/pull/2662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2662.patch", "merged_at": 1629901088000 }
## Load the data from any Dataset repository on the Hub

This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script.

As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. Here is an example with the `allenai/c4` repository that contains a lot of compressed json lines files:

```python
from datasets import load_dataset

data_files = {"train": "en/c4-train.*.json.gz"}
c4 = load_dataset("allenai/c4", data_files=data_files, split="train", streaming=True)
print(c4.n_shards)     # 1024
print(next(iter(c4)))  # {'text': 'Beginners BBQ Class Takin...'}
```

By default it loads all the files, but as shown in the example you can choose the ones you want with unix style patterns. Of course it's still possible to use dataset scripts since they offer the most flexibility.

## Implementation details

It uses `huggingface_hub` to list the files in a dataset repository. If you provide a path to a local directory instead of a repository name, it works the same way but it uses `glob`.

Depending on the data files available, or passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders. Because of this, it's not possible to load both csv and json files at once. In this case you have to load them separately and then concatenate the two datasets for example.

## TODO

- [x] tests
- [x] docs
- [x] when huggingface_hub gets a new release, update the CI and the setup.py

Close https://github.com/huggingface/datasets/issues/2629
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2662/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2662/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2661/comments
https://api.github.com/repos/huggingface/datasets/issues/2661/events
https://github.com/huggingface/datasets/pull/2661
946,446,967
MDExOlB1bGxSZXF1ZXN0NjkxNjE5MzAz
2,661
Add SD task for SUPERB
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I make a summary about our discussion with @lewtun and @Narsil on the agreed schema for this dataset and the additional steps required to generate the 2D array labels:\r\n- The labels for this dataset are a 2D array:\r\n Given an example:\r\n ```python\r\n {\"record_id\": record_id, \"file\": file, \"start\": start, \"end\": end, \"speakers\": [...]}\r\n ```\r\n the labels are a 2D array of shape `(num_frames, num_speakers)` where `num_frames = end - start` and `num_speakers = 2`.\r\n- In order to avoid a too large dataset (too large disk space), `datasets` does not store the 2D array label. Instead, we store a compact form:\r\n ```\r\n \"speakers\": [\r\n {\"speaker_id\": speaker_0_id, \"start\": start_0_speaker_0, \"end\": end_0_speaker_0},\r\n {\"speaker_id\": speaker_0_id, \"start\": start_1_speaker_0, \"end\": end_1_speaker_0},\r\n {\"speaker_id\": speaker_1_id, \"start\": start_0_speaker_1, \"end\": end_0_speaker_1},\r\n ],\r\n ```\r\n - Once loaded the dataset, an additional step is required to generate the 2D array label from this compact form\r\n - This additional step should be a modified version of the s3prl method `_get_labeled_speech`:\r\n - Original s3prl `_get_labeled_speech` includes 2 functionalities: reading the audio file and transforming it into an array, and generating the label 2D array; I think we should separate these 2 functionalities\r\n - Original s3prl `_get_labeled_speech` performs 2 steps to generate the labels:\r\n - Transform start/end seconds (float) into frame numbers (int): I have already done this step to generate the dataset\r\n - Generate the 2D array label from the frame numbers\r\n\r\nI also ping @osanseviero and @lhoestq to include them in the loop.", "Here I would like to discuss (and agree) one of the decisions I made, as I'm not completely satisfied with it: to transform the seconds (float) into frame numbers (int) to generate this dataset.\r\n\r\n- A priori, the most natural and general choice would be to preserve the seconds (float), because:\r\n - this is the way the raw data comes from\r\n - the transformation into frame numbers depends on the sample rate, frame_shift and subsampling\r\n\r\nHowever, I finally decided to transform seconds into frame numbers because:\r\n- for SUPERB, sampling rate, frame_shift and subsampling are fixed (`rate = 16_000`, `frame_shift = 160`, `subsampling = 1`)\r\n- it makes easier the post-processing, as labels are generated from sample numbers: labels are a 2D array of shape `(num_frames, num_speakers)`\r\n- the number of examples depends on the number of frames:\r\n - if an example has more than 2_000 frames, then it is split into 2 examples. This is the case for `record_id = \"7859-102521-0017_3983-5371-0014\"`, which has 2_452 frames and it is split into 2 examples:\r\n ```\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 0, \"end\": 2_000,...},\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 2_000, \"end\": 2_452,...},\r\n ```\r\n\r\nAs I told you, I'm not totally convinced of this decision, and I would really appreciate your opinion.\r\n\r\ncc: @lewtun @Narsil @osanseviero @lhoestq ", "It makes total sense to prepare the data to be in a format that can actually be used for model training and evaluation. That's one of the roles of this lib :)\r\n\r\nSo for me it's ok to use frames as a unit instead of seconds. Just pinging @patrickvonplaten in case he has ever played with such audio tasks and has some advice. 
For the context: the task is to classify which speaker is speaking, let us know if you are aware of any convenient/standard format for this.\r\n\r\nAlso I'm not sure why you have to split an example if it's longer that 2,000 frames ?", "> Also I'm not sure why you have to split an example if it's longer that 2,000 frames ?\r\n\r\nIt is a convention in SUPERB benchmark.", "Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n- one to generate the 2D array labels\r\n- one to load the audio file into an array, but taking into account start/end to cut the audio\r\n\r\nIs there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...", "You could add an example of usage in the dataset card, as it is done for other audio datasets", "@albertvillanova this simple function can be edited simply to add the start/stop cuts \r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L29 ", "Does this function work on windows ?", "Windows ? What is it ? (Not sure not able to test, it's directly calling ffmpeg binary, so depending on the setup it could but can't say for sure without testing)\r\n", "It's one of the OS we're supposed to support :P (for the better and for the worse)", "> Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n> \r\n> * one to generate the 2D array labels\r\n> * one to load the audio file into an array, but taking into account start/end to cut the audio\r\n> \r\n> Is there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...\r\n\r\n+1 on providing the necessary functions on the dataset card. aside from that, the current implementation looks great from my perspective!" ]
1,626,453,801,000
1,628,096,633,000
1,628,096,633,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2661", "html_url": "https://github.com/huggingface/datasets/pull/2661", "diff_url": "https://github.com/huggingface/datasets/pull/2661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2661.patch", "merged_at": 1628096632000 }
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).

TODO:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [x] README: tags + description sections
- ~~Add DER metric~~ (we leave the DER metric for a follow-up PR)

Related to #2619.
Close #2653.

cc: @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2661/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2661/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2660/comments
https://api.github.com/repos/huggingface/datasets/issues/2660/events
https://github.com/huggingface/datasets/pull/2660
946,316,180
MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0
2,660
Move checks from _map_single to map
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq This one has been open for a while. Could you please take a look?", "@lhoestq Ready for the final review!", "I forgot to update the signature of `DatasetDict.map`, so did that now." ]
1,626,443,613,000
1,630,937,543,000
1,630,937,543,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2660", "html_url": "https://github.com/huggingface/datasets/pull/2660", "diff_url": "https://github.com/huggingface/datasets/pull/2660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2660.patch", "merged_at": 1630937543000 }
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR makes the `remove_columns` check consistent with `input_columns` by adding support for a single string value, which is then wrapped into a list.
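A hedged sketch of the behavior described above; the toy dataset and column names are hypothetical:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb"], "label": [0, 1]})

# With this change, remove_columns accepts a single string as well as a
# list; both calls below are expected to be equivalent.
ds1 = ds.map(lambda ex: {"length": len(ex["text"])}, remove_columns="text")
ds2 = ds.map(lambda ex: {"length": len(ex["text"])}, remove_columns=["text"])
assert set(ds1.column_names) == set(ds2.column_names) == {"label", "length"}
```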
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2660/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2659/comments
https://api.github.com/repos/huggingface/datasets/issues/2659/events
https://github.com/huggingface/datasets/pull/2659
946,155,407
MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3
2,659
Allow dataset config kwargs to be None
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,431,138,000
1,626,439,567,000
1,626,439,567,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2659", "html_url": "https://github.com/huggingface/datasets/pull/2659", "diff_url": "https://github.com/huggingface/datasets/pull/2659.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2659.patch", "merged_at": 1626439566000 }
Close https://github.com/huggingface/datasets/issues/2658

The dataset config kwargs that were set to None were simply ignored. This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, where None lets pandas infer the separator.

cc @SBrandeis
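A minimal sketch of what the fix allows; the file name is a hypothetical placeholder:

```python
from datasets import load_dataset

# After this fix, sep=None is forwarded to pandas.read_csv instead of being
# dropped, so pandas can sniff the delimiter. "my_data.txt" is hypothetical.
ds = load_dataset("csv", data_files="my_data.txt", sep=None, split="train")
```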
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2659/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2658/comments
https://api.github.com/repos/huggingface/datasets/issues/2658/events
https://github.com/huggingface/datasets/issues/2658
946,139,532
MDU6SXNzdWU5NDYxMzk1MzI=
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,626,429,944,000
1,626,439,566,000
1,626,439,566,000
MEMBER
null
null
null
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible to have the csv loader infer the separator.

Related to https://github.com/huggingface/datasets/pull/2656

cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2658/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2657/comments
https://api.github.com/repos/huggingface/datasets/issues/2657/events
https://github.com/huggingface/datasets/issues/2657
945,822,829
MDU6SXNzdWU5NDU4MjI4Mjk=
2,657
`to_json` reporting enhancements
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,626,391,938,000
1,626,392,033,000
null
CONTRIBUTOR
null
null
null
While using `to_json` 2 things came to mind that would have made the experience easier on the user:

1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently.

2. It took me a while to make sense of the reported numbers:

   ```
   22%|██▏ | 1536/7076 [12:30:57<44:09:42, 28.70s/it]
   ```

   So iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar is perfect, but the numbers it reports are meaningless until one discovers that 1 it = 10K samples. And one still has to convert these in the head - so it's not quick.

   Not exactly sure what's the best way to approach this; perhaps it can be part of `desc`? Or report M or K, so it'd be built-in if it were to print, e.g.:

   ```
   22%|██▏ | 15360K/70760K [12:30:57<44:09:42, 28.70s/it]
   ```

   or

   ```
   22%|██▏ | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it]
   ```

   (while of course remaining friendly to small datasets)

I forget if tqdm lets you add a magnitude identifier to the running count.

Thank you!
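For reference, tqdm does support magnitude prefixes on the running count via `unit_scale`; a minimal sketch:

```python
from tqdm import tqdm

# unit_scale=True renders the counter with SI prefixes (K/M/G), matching
# the second format suggested above, e.g. "15.4M/70.8M ex".
for _ in tqdm(range(70_760_000), unit="ex", unit_scale=True):
    pass
```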
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2657/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2656/comments
https://api.github.com/repos/huggingface/datasets/issues/2656/events
https://github.com/huggingface/datasets/pull/2656
945,421,790
MDExOlB1bGxSZXF1ZXN0NjkwNzUzNjA3
2,656
Change `from_csv` default arguments
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is not the default in pandas right ?\r\nWe try to align our CSV loader with the pandas API.\r\n\r\nMoreover according to their documentation, the python parser is used when sep is None, which might not be the fastest one.\r\n\r\nMaybe users could just specify `sep=None` themselves ?\r\nIn this case we should add some documentation about this" ]
1,626,358,146,000
1,626,431,006,000
1,626,431,006,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2656", "html_url": "https://github.com/huggingface/datasets/pull/2656", "diff_url": "https://github.com/huggingface/datasets/pull/2656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2656.patch", "merged_at": null }
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator.

This PR allows users to use this pandas feature by passing `sep=None` to `Dataset.from_csv`:

```python
Dataset.from_csv(
    ...,
    sep=None
)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2656/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2655/comments
https://api.github.com/repos/huggingface/datasets/issues/2655/events
https://github.com/huggingface/datasets/issues/2655
945,382,723
MDU6SXNzdWU5NDUzODI3MjM=
2,655
Allow the selection of multiple columns at once
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense. \r\nIs there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like one? Please let me know if I'm missing something.", "Hi! Sorry for the delay.\r\n\r\nIn this case, the dataset would be a `datasets.Dataset` and we want to select multiple columns, the `idx` and `label` columns for example.\r\n\r\nMy issue is that my dataset is too big for memory if I load everything into pandas." ]
1,626,355,845,000
1,627,054,857,000
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.

**Describe the solution you'd like**

```python
my_dataset = ...  # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```

**Describe alternatives you've considered**
We can do `[dataset[col] for col in ('idx', 'label')]`.

**Additional context**
This is of course very minor.
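A hedged workaround sketch with the current API, for datasets too large to convert to pandas; the column names follow the example above:

```python
# my_dataset: a datasets.Dataset with columns ['idx', 'sentence', 'label'].
# Keep only the desired columns by dropping the rest; this avoids loading
# the full table into pandas.
keep = {"idx", "label"}
drop = [c for c in my_dataset.column_names if c not in keep]
subset = my_dataset.remove_columns(drop)
idx, label = subset["idx"], subset["label"]
```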
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2655/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2654/comments
https://api.github.com/repos/huggingface/datasets/issues/2654/events
https://github.com/huggingface/datasets/issues/2654
945,167,231
MDU6SXNzdWU5NDUxNjcyMzE=
2,654
Give a user feedback if the dataset he loads is streamable or not
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "#self-assign", "I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n" ]
1,626,340,047,000
1,627,902,201,000
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.**
I would love to know if a `dataset` is streamable or not with the current implementation.

**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` when it's not streamable, e.g. if it is an archive.

**Describe alternatives you've considered**
Add a new metadata tag for "streaming"
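Until such a warning exists, here is a minimal sketch of how a caller could probe streamability themselves, building on the `NotImplementedError` behavior quoted in the comments above. `is_streamable` is a hypothetical helper, not part of the library, and it assumes the dataset has a `train` split:

```python
from datasets import load_dataset

def is_streamable(path, **kwargs):
    # Try to pull a single example in streaming mode; an unsupported
    # extraction protocol surfaces as NotImplementedError.
    try:
        streamed = load_dataset(path, streaming=True, **kwargs)
        next(iter(streamed["train"]))
        return True
    except NotImplementedError:
        return False
```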
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2654/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2653/comments
https://api.github.com/repos/huggingface/datasets/issues/2653/events
https://github.com/huggingface/datasets/issues/2653
945,102,321
MDU6SXNzdWU5NDUxMDIzMjE=
2,653
Add SD task for SUPERB
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/7", "html_url": "https://github.com/huggingface/datasets/milestone/7", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels", "id": 6931350, "node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==", "number": 7, "title": "1.11", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 2, "state": "closed", "created_at": 1625809740000, "updated_at": 1630560843000, "due_on": 1627628400000, "closed_at": 1630560843000 }
[ "Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data\r\n\r\nThen we can use the URLs for the files to load the data in `superb`'s dataset loading script.\r\n\r\nFor consistency, I suggest we name the folders in `superb-data` in the same way as the configs in the dataset loading script - e.g. use `sd` for speech diarization in both places :)", "@lewtun @lhoestq: \r\n\r\nI have already generated the LibriMix corpus and prepared the corpus for diarization. The output is 3 dirs (train, dev, test), each one containing 6 files: reco2dur rttm segments spk2utt utt2spk wav.scp\r\n\r\nNext steps:\r\n- Upload these files to the superb-data repo\r\n- Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n\r\nNote that processing of these files is a bit more intricate than usual datasets: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/diarization/dataset.py#L233\r\n\r\n" ]
1,626,335,500,000
1,628,096,632,000
1,628,096,632,000
MEMBER
null
null
null
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).

Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [ ] README: tags + description sections

Related to #2619.

cc: @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2653/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2653/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2652/comments
https://api.github.com/repos/huggingface/datasets/issues/2652/events
https://github.com/huggingface/datasets/pull/2652
944,865,924
MDExOlB1bGxSZXF1ZXN0NjkwMjg0MTI4
2,652
Fix logging docstring
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,304,798,000
1,626,608,466,000
1,626,343,051,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2652", "html_url": "https://github.com/huggingface/datasets/pull/2652", "diff_url": "https://github.com/huggingface/datasets/pull/2652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2652.patch", "merged_at": 1626343051000 }
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2652/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2651/comments
https://api.github.com/repos/huggingface/datasets/issues/2651/events
https://github.com/huggingface/datasets/issues/2651
944,796,961
MDU6SXNzdWU5NDQ3OTY5NjE=
2,651
Setting log level higher than warning does not suppress progress bar
{ "login": "Isa-rentacs", "id": 1147443, "node_id": "MDQ6VXNlcjExNDc0NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1147443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isa-rentacs", "html_url": "https://github.com/Isa-rentacs", "followers_url": "https://api.github.com/users/Isa-rentacs/followers", "following_url": "https://api.github.com/users/Isa-rentacs/following{/other_user}", "gists_url": "https://api.github.com/users/Isa-rentacs/gists{/gist_id}", "starred_url": "https://api.github.com/users/Isa-rentacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isa-rentacs/subscriptions", "organizations_url": "https://api.github.com/users/Isa-rentacs/orgs", "repos_url": "https://api.github.com/users/Isa-rentacs/repos", "events_url": "https://api.github.com/users/Isa-rentacs/events{/privacy}", "received_events_url": "https://api.github.com/users/Isa-rentacs/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```", "Thank you, it worked :)", "See https://github.com/huggingface/datasets/issues/2528 for reference", "Note also that you can disable the progress bar with\r\n\r\n```python\r\nfrom datasets.utils import disable_progress_bar\r\ndisable_progress_bar()\r\n```\r\n\r\nSee https://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/src/datasets/utils/tqdm_utils.py#L84", "Now the library officially recommends `set_progress_bar_enabled(False)`\r\n\r\n```py\r\nfrom datasets.utils import set_progress_bar_enabled\r\n\r\nset_progress_bar_enabled(False)\r\n```\r\n\r\nsource:\r\n\r\nhttps://github.com/huggingface/datasets/blob/1fd47120ace13626c528367787ffa13e1a26e6c0/src/datasets/utils/tqdm_utils.py#L83-L88\r\n\r\n" ]
1,626,296,811,000
1,639,533,564,000
1,626,320,495,000
NONE
null
null
null
## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress them by setting the log level higher than `warning`; however, doing so doesn't suppress them with version 1.9.0. I also tried to set the `DATASETS_VERBOSITY` environment variable to `error` or `critical`, but that didn't work either.

## Steps to reproduce the bug
```python
import datasets
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()

def dummy_map(batch):
    return batch

common_voice_train = datasets.load_dataset("common_voice", "de", split="train")
common_voice_test = datasets.load_dataset("common_voice", "de", split="test")

common_voice_train.map(dummy_map)
```

## Expected results
- The progress bar for the `.map` call won't be shown

## Actual results
- The progress bar for `.map` is still shown

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 4.0.1
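For quick reference, the switch that the comment thread above converged on, quoted there as the library's officially recommended way to turn the bars off in this version:

```python
from datasets.utils import set_progress_bar_enabled

# Globally disables the tqdm bars shown by map/filter/load_dataset.
set_progress_bar_enabled(False)
```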
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2651/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2650/comments
https://api.github.com/repos/huggingface/datasets/issues/2650/events
https://github.com/huggingface/datasets/issues/2650
944,672,565
MDU6SXNzdWU5NDQ2NzI1NjU=
2,650
[load_dataset] shard and parallelize the process
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "I need the same feature for distributed training" ]
1,626,285,898,000
1,635,177,238,000
null
CONTRIBUTOR
null
null
null
- Some huge datasets take forever to build the first time (e.g. oscar/en), as the build runs on a single CPU core.
- If the build crashes, everything done up to that point gets lost.

Request: shard the build over multiple arrow files, which would enable:
- a much faster build, by parallelizing the build process
- crash recovery: if the process crashes, the completed arrow files don't need to be re-built again

Thank you!

@lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2650/reactions", "total_count": 9, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/2650/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2649/comments
https://api.github.com/repos/huggingface/datasets/issues/2649/events
https://github.com/huggingface/datasets/issues/2649
944,651,229
MDU6SXNzdWU5NDQ2NTEyMjk=
2,649
adding progress bar / ETA for `load_dataset`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,626,284,079,000
1,626,284,280,000
null
CONTRIBUTOR
null
null
null
Please consider:
```
Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
HF google storage unreachable. Downloading and preparing it from source
```
and no indication whatsoever of whether things are working well or when it'll be done. It's important to have an estimated completion time when doing slurm jobs, since some instances have a cap on run-time.

I think for this particular job it sat in total silence for 30min, and then it started generating:
```
897850 examples [07:24, 10286.71 examples/s]
```
which is already great!

Request:
1. ETA - knowing how many hours to allocate for a slurm job
2. progress bar - helps to know things are working, aren't stuck, and where we are at.

Thank you!

@lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2649/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2649/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2648/comments
https://api.github.com/repos/huggingface/datasets/issues/2648/events
https://github.com/huggingface/datasets/issues/2648
944,484,522
MDU6SXNzdWU5NDQ0ODQ1MjI=
2,648
Add web_split dataset for Paraphase and Rephrase benchmark
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false } ]
null
[ "#take" ]
1,626,272,676,000
1,626,272,772,000
null
CONTRIBUTOR
null
null
null
## Describe:
For getting simple sentences from complex sentences, there is already a dataset and task, wiki_split, available in Hugging Face datasets. This web_split is a very similar dataset. Some research papers state that if we train a model on the combination of these two datasets, it yields better results on both test sets. This dataset is made from WebNLG data.

All the dataset-related details are provided in the repository below.

Github link: https://github.com/shashiongithub/Split-and-Rephrase
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2648/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2648/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2647/comments
https://api.github.com/repos/huggingface/datasets/issues/2647/events
https://github.com/huggingface/datasets/pull/2647
944,424,941
MDExOlB1bGxSZXF1ZXN0Njg5OTExMzky
2,647
Fix anchor in README
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,268,964,000
1,626,608,478,000
1,626,331,847,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2647", "html_url": "https://github.com/huggingface/datasets/pull/2647", "diff_url": "https://github.com/huggingface/datasets/pull/2647.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2647.patch", "merged_at": 1626331847000 }
I forgot to push this fix in #2611, so I'm sending it now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2647/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2646/comments
https://api.github.com/repos/huggingface/datasets/issues/2646/events
https://github.com/huggingface/datasets/issues/2646
944,379,954
MDU6SXNzdWU5NDQzNzk5NTQ=
2,646
downloading of yahoo_answers_topics dataset failed
{ "login": "vikrant7k", "id": 66781249, "node_id": "MDQ6VXNlcjY2NzgxMjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/66781249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikrant7k", "html_url": "https://github.com/vikrant7k", "followers_url": "https://api.github.com/users/vikrant7k/followers", "following_url": "https://api.github.com/users/vikrant7k/following{/other_user}", "gists_url": "https://api.github.com/users/vikrant7k/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikrant7k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikrant7k/subscriptions", "organizations_url": "https://api.github.com/users/vikrant7k/orgs", "repos_url": "https://api.github.com/users/vikrant7k/repos", "events_url": "https://api.github.com/users/vikrant7k/events{/privacy}", "received_events_url": "https://api.github.com/users/vikrant7k/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota was reset" ]
1,626,265,865,000
1,626,340,516,000
null
NONE
null
null
null
## Describe the bug
I get the error `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` when I try to download the yahoo_answers_topics dataset.

## Steps to reproduce the bug
# Sample code to reproduce the bug
self.dataset = load_dataset('yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')

## Expected results
The dataset downloads and loads successfully.

## Actual results
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
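As the comment above notes, this checksum failure usually comes from Google Drive's per-day download quota, so retrying later tends to succeed. A minimal retry sketch; `load_with_retry` is a hypothetical helper, and the exception class is the one named in the error above:

```python
import time

from datasets import load_dataset
from datasets.utils.info_utils import NonMatchingChecksumError

def load_with_retry(name, config=None, retries=3, wait_seconds=3600, **kwargs):
    # Retry around the daily Google Drive quota: wait and try again
    # whenever checksum verification fails.
    for attempt in range(retries):
        try:
            return load_dataset(name, config, **kwargs)
        except NonMatchingChecksumError:
            if attempt == retries - 1:
                raise
            time.sleep(wait_seconds)
```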
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2646/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2645/comments
https://api.github.com/repos/huggingface/datasets/issues/2645/events
https://github.com/huggingface/datasets/issues/2645
944,374,284
MDU6SXNzdWU5NDQzNzQyODQ=
2,645
load_dataset processing failed with OS error after downloading a dataset
{ "login": "fake-warrior8", "id": 40395156, "node_id": "MDQ6VXNlcjQwMzk1MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/40395156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fake-warrior8", "html_url": "https://github.com/fake-warrior8", "followers_url": "https://api.github.com/users/fake-warrior8/followers", "following_url": "https://api.github.com/users/fake-warrior8/following{/other_user}", "gists_url": "https://api.github.com/users/fake-warrior8/gists{/gist_id}", "starred_url": "https://api.github.com/users/fake-warrior8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fake-warrior8/subscriptions", "organizations_url": "https://api.github.com/users/fake-warrior8/orgs", "repos_url": "https://api.github.com/users/fake-warrior8/repos", "events_url": "https://api.github.com/users/fake-warrior8/events{/privacy}", "received_events_url": "https://api.github.com/users/fake-warrior8/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It looks like an issue with pytorch.\r\n\r\nCould you try to run `import torch` and see if it raises an error ?", "> Hi ! It looks like an issue with pytorch.\r\n> \r\n> Could you try to run `import torch` and see if it raises an error ?\r\n\r\nIt works. Thank you!" ]
1,626,265,433,000
1,626,341,642,000
1,626,341,642,000
NONE
null
null
null
## Describe the bug
After downloading a dataset like opus100, loading fails with `OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS`.

## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```

## Expected results
There is no error when running load_dataset.

## Actual results
Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prep
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example
    example = cast_to_python_objects(example)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_ob
    return _cast_to_python_objects(obj)[0]
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_o
    import torch
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module>
    _load_global_deps()
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen: cannot load any more object with static TLS

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "download_hub_opus100.py", line 9, in <module>
    this_dataset = load_dataset('opus100', language_pair)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepa
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prep
    + str(e)
OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS

## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid
- Python version: 3.6.6
- PyArrow version: 3.0.0
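A sketch of the workaround implied by the comment thread above: once `import torch` succeeds on its own, the dataset loads fine, and a common mitigation for the `static TLS` dlopen failure on older glibc systems is simply to import torch before anything else. This is an assumption about the environment, not a documented fix in `datasets`:

```python
# Importing torch first forces its shared libraries to load early, which
# avoids "cannot load any more object with static TLS" on some older
# glibc setups.
import torch  # noqa: F401

from datasets import load_dataset

this_dataset = load_dataset('opus100', 'af-en')
```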
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2645/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2644/comments
https://api.github.com/repos/huggingface/datasets/issues/2644/events
https://github.com/huggingface/datasets/issues/2644
944,254,748
MDU6SXNzdWU5NDQyNTQ3NDg=
2,644
Batched `map` not allowed to return 0 items
{ "login": "pcuenca", "id": 1177582, "node_id": "MDQ6VXNlcjExNzc1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcuenca", "html_url": "https://github.com/pcuenca", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "repos_url": "https://api.github.com/users/pcuenca/repos", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed.", "Sounds good! Do you want me to propose a PR? I'm quite busy right now, but if it's not too urgent I could take a look next week.", "Sure if you're interested feel free to open a PR :)\r\n\r\nYou can also ping me anytime if you have questions or if I can help !", "Sorry to ping you, @lhoestq, did you have a chance to take a look at the proposed PR? Thank you!", "Yes and it's all good, thank you :)\r\n\r\nFeel free to close this issue if it's good for you", "Everything's good, thanks!" ]
1,626,256,699,000
1,627,311,315,000
1,627,311,315,000
CONTRIBUTOR
null
null
null
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them).

According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`. However, when the returned batch has a size of zero (no item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L100), but no elements were returned in this case. For this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs.

## Steps to reproduce the bug
```python
def select_rows(examples):
    # `key` is a column name that exists in the original dataset
    # The following line simulates no matches found, so we return an empty batch
    result = {'key': []}
    return result

filtered_dataset = dataset.map(
    select_rows,
    remove_columns = dataset.column_names,
    batched = True,
    num_proc = 1,
    desc = "Selecting rows with images that exist"
)
```

The code above immediately triggers the exception. If we use the following instead:

```python
def select_rows(examples):
    # `key` is a column name that exists in the original dataset
    result = {'key': []}    # or defaultdict or whatever
    # code to check for condition and append elements to result
    # some_items_found will be set to True if there were any matching elements in the batch
    return result if some_items_found else {}
```

Then it _seems_ to work, but it eventually fails with some sort of schema error. I believe it may happen when an empty batch is followed by a non-empty one, but I haven't set up a test to verify it.

In my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items.

## Expected results
The dataset would be filtered and only the matching rows would be returned.

## Actual results
An exception is encountered, as described. Using a workaround makes it fail further along the line.

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
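Until the fix discussed in the comments landed, the same row selection could be expressed with `Dataset.filter`, which never requires the user function to build a batch at all. A sketch under the assumption that the path lives in a hypothetical `file_path` column:

```python
import os

def file_exists(example):
    # Keep only rows whose referenced file is present on disk.
    return os.path.exists(example["file_path"])

filtered_dataset = dataset.filter(file_exists, num_proc=1)
```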
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2644/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2643/comments
https://api.github.com/repos/huggingface/datasets/issues/2643/events
https://github.com/huggingface/datasets/issues/2643
944,220,273
MDU6SXNzdWU5NDQyMjAyNzM=
2,643
Enum used in map functions will raise a RecursionError with dill.
{ "login": "jorgeecardona", "id": 100702, "node_id": "MDQ6VXNlcjEwMDcwMg==", "avatar_url": "https://avatars.githubusercontent.com/u/100702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jorgeecardona", "html_url": "https://github.com/jorgeecardona", "followers_url": "https://api.github.com/users/jorgeecardona/followers", "following_url": "https://api.github.com/users/jorgeecardona/following{/other_user}", "gists_url": "https://api.github.com/users/jorgeecardona/gists{/gist_id}", "starred_url": "https://api.github.com/users/jorgeecardona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jorgeecardona/subscriptions", "organizations_url": "https://api.github.com/users/jorgeecardona/orgs", "repos_url": "https://api.github.com/users/jorgeecardona/repos", "events_url": "https://api.github.com/users/jorgeecardona/events{/privacy}", "received_events_url": "https://api.github.com/users/jorgeecardona/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)", "Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.py\r\nThere is already a suggestion in this message about how to do it:\r\nhttps://github.com/uqfoundation/dill/issues/250#issuecomment-852566284\r\n\r\nLet me know if such a workaround could help, and feel free to open a PR if you want to contribute !", "I have the same bug.\r\nthe code is as follows:\r\n![image](https://user-images.githubusercontent.com/84262181/139785849-620dd4ac-86ce-4212-8163-942bbca305aa.png)\r\nthe error is: \r\n![image](https://user-images.githubusercontent.com/84262181/139785899-88a9bd75-c60b-45a5-b819-830c7c096f3d.png)\r\n\r\nLook for the solution for this bug.", "Hi ! I think your RecursionError comes from a different issue @BitcoinNLPer , could you open a separate issue please ?\r\n\r\nAlso which dataset are you using ? I tried loading `CodedotAI/code_clippy` but I get a different error\r\n```python\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1615, in load_dataset\r\n **config_kwargs,\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1446, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 101, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/Users/quentinlhoest/.virtualenvs/hf-datasets/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/Users/quentinlhoest/.cache/huggingface/modules/datasets_modules/datasets/CodedotAI___code_clippy/d332f69d036e8c80f47bc9a96d676c3fa30cb50af7bb81e2d4d12e80b83efc4d/code_clippy.py\", line 66, in <module>\r\n url_elements = results.find_all(\"a\")\r\nAttributeError: 'NoneType' object has no attribute 'find_all'\r\n```" ]
1,626,254,168,000
1,635,846,671,000
null
NONE
null
null
null
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception, as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284

In my particular case, I use an enum to define an argument with fixed options, using the `TrainingArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module, including the definition of the enum, which runs into the dill bug described above.

## Steps to reproduce the bug
```python
from datasets import load_dataset
from enum import Enum

class A(Enum):
    a = 'a'

def main():
    a = A.a

    def f(x):
        return {} if a == a.a else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```

## Expected results
The known problem with dill could be prevented as explained in the link above (workaround). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.

## Actual results
```python
  File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type
    pickler.save_reduce(_create_type, (type(obj), obj.__name__,
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce
    save(args)
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple
    save(element)
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save
    self.framer.commit_frame()
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame
    if f.tell() >= self._FRAME_SIZE_TARGET or force:
RecursionError: maximum recursion depth exceeded while calling a Python object
```

## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 3.0.0
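Until this is handled on the `dill` side, one user-level workaround is to keep the Enum member out of everything the mapped function closes over and compare plain values instead. A sketch of the repro rewritten that way; this sidesteps the bug rather than fixing it:

```python
from enum import Enum

from datasets import load_dataset

class A(Enum):
    a = 'a'

def main():
    # Resolve the Enum to its plain value outside the mapped function,
    # so dill never has to pickle the Enum class.
    a = A.a.value  # a plain str

    def f(x):
        return {} if a == 'a' else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```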
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2643/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2642/comments
https://api.github.com/repos/huggingface/datasets/issues/2642/events
https://github.com/huggingface/datasets/issues/2642
944,175,697
MDU6SXNzdWU5NDQxNzU2OTc=
2,642
Support multi-worker with streaming dataset (IterableDataset).
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! This is a great idea :)\r\nI think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. \r\n\r\nRegarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a second step" ]
1,626,250,978,000
1,626,341,854,000
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).

**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.

**Describe alternatives you've considered**
A simpler solution is to shard the dataset and process it in parallel with a pytorch dataloader, as sketched below. The shards do not need to be of equal size.
* https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset

**Additional context**
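A minimal sketch of the pytorch alternative mentioned above: each DataLoader worker consumes only the shards assigned to it via `get_worker_info()`, so shards of unequal size are fine. The shard format and the `read_shard` body are placeholder assumptions:

```python
from torch.utils.data import DataLoader, IterableDataset, get_worker_info

class ShardedStream(IterableDataset):
    def __init__(self, shards):
        self.shards = shards  # e.g. a list of file paths

    def __iter__(self):
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        # Stride over the shard list so each worker reads a disjoint subset.
        for shard in self.shards[worker_id::num_workers]:
            yield from self.read_shard(shard)

    def read_shard(self, shard):
        # Placeholder: open the shard and yield preprocessed examples.
        with open(shard) as f:
            for line in f:
                yield line.strip()

loader = DataLoader(ShardedStream(["shard0.txt", "shard1.txt"]), num_workers=2)
```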
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2642/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2641/comments
https://api.github.com/repos/huggingface/datasets/issues/2641/events
https://github.com/huggingface/datasets/issues/2641
943,838,085
MDU6SXNzdWU5NDM4MzgwODU=
2,641
load_dataset("financial_phrasebank") NonMatchingChecksumError
{ "login": "courtmckay", "id": 13956255, "node_id": "MDQ6VXNlcjEzOTU2MjU1", "avatar_url": "https://avatars.githubusercontent.com/u/13956255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/courtmckay", "html_url": "https://github.com/courtmckay", "followers_url": "https://api.github.com/users/courtmckay/followers", "following_url": "https://api.github.com/users/courtmckay/following{/other_user}", "gists_url": "https://api.github.com/users/courtmckay/gists{/gist_id}", "starred_url": "https://api.github.com/users/courtmckay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/courtmckay/subscriptions", "organizations_url": "https://api.github.com/users/courtmckay/orgs", "repos_url": "https://api.github.com/users/courtmckay/repos", "events_url": "https://api.github.com/users/courtmckay/events{/privacy}", "received_events_url": "https://api.github.com/users/courtmckay/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! It's probably because this dataset is stored on google drive and it has a per day quota limit. It should work if you retry, I was able to initiate the download.\r\n\r\nSimilar issue [here](https://github.com/huggingface/datasets/issues/2646)", "Hi ! Loading the dataset works on my side as well.\r\nFeel free to try again and let us know if it works for you know", "Thank you! I've been trying periodically for the past month, and no luck yet with this particular dataset. Just tried again and still hitting the checksum error.\r\n\r\nCode:\r\n\r\n`dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\") `\r\n\r\nTraceback:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-2-55cc2144f31e> in <module>\r\n----> 1 dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\")\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 859 ignore_verifications=ignore_verifications,\r\n 860 try_from_hf_gcs=try_from_hf_gcs,\r\n--> 861 use_auth_token=use_auth_token,\r\n 862 )\r\n 863 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 582 if not downloaded_from_gcs:\r\n 583 self._download_and_prepare(\r\n--> 584 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 585 )\r\n 586 # Sync info\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 642 if verify_infos:\r\n 643 verify_checksums(\r\n--> 644 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 645 )\r\n 646 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip']\r\n```" ]
1,626,211,309,000
1,626,701,170,000
null
NONE
null
null
null
## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError

## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```

## Expected results
I expect to see the financial_phrasebank dataset downloaded successfully

## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip']

## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6
- Python version: 3.7.10
- PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2641/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2640/comments
https://api.github.com/repos/huggingface/datasets/issues/2640/events
https://github.com/huggingface/datasets/pull/2640
943,591,055
MDExOlB1bGxSZXF1ZXN0Njg5MjAxMDkw
2,640
Fix docstrings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,192,554,000
1,626,331,861,000
1,626,329,172,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2640", "html_url": "https://github.com/huggingface/datasets/pull/2640", "diff_url": "https://github.com/huggingface/datasets/pull/2640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2640.patch", "merged_at": 1626329172000 }
Fix rendering of some docstrings.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2640/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2639/comments
https://api.github.com/repos/huggingface/datasets/issues/2639/events
https://github.com/huggingface/datasets/pull/2639
943,527,463
MDExOlB1bGxSZXF1ZXN0Njg5MTQ3NDE5
2,639
Refactor patching to specific submodule
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,188,925,000
1,626,195,169,000
1,626,195,169,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2639", "html_url": "https://github.com/huggingface/datasets/pull/2639", "diff_url": "https://github.com/huggingface/datasets/pull/2639.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2639.patch", "merged_at": 1626195168000 }
Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created. In relation to the initial approach followed in #2631.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2639/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2638/comments
https://api.github.com/repos/huggingface/datasets/issues/2638/events
https://github.com/huggingface/datasets/pull/2638
943,484,913
MDExOlB1bGxSZXF1ZXN0Njg5MTA5NTg1
2,638
Streaming for the Json loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "A note is that I think we should add a few indicator of status (as mentioned by @stas00 in #2649), probably at the (1) downloading, (2) extracting and (3) reading steps. In particular when loading many very large files it's interesting to know a bit where we are in the process.", "I tested locally, and the builtin `json` loader is 4x slower than `pyarrow.json`. Thanks for the comment @albertvillanova !\r\n\r\nTherefore I switched back to using `pyarrow.json`, but only on the batch that is read. This way we don't have to deal with its `block_size`, and it only loads in memory one batch at a time." ]
1,626,187,026,000
1,626,451,172,000
1,626,451,171,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2638", "html_url": "https://github.com/huggingface/datasets/pull/2638", "diff_url": "https://github.com/huggingface/datasets/pull/2638.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2638.patch", "merged_at": 1626451171000 }
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows. Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related to #2573). So I switched to using `open`, which is extended to support reading from remote files progressively, and I removed the pyarrow json reader, which was not practical. Instead, I'm using the classical `json.loads` from the standard library.
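A minimal sketch of the batch-by-batch approach settled on in the comments above, not the actual `datasets` implementation; `fsspec.open` stands in for the extended, streaming-aware `open`, and the batch size is an arbitrary illustration:

```python
# Read a remote JSON-lines file progressively: only one fixed-size batch is
# ever held in memory, and pyarrow parses that batch alone, so block_size
# tuning stops mattering and rows can be yielded before the download ends.
import io
import itertools

import fsspec  # assumption: stands in for the extended `open`
import pyarrow.json as paj

def iter_json_batches(url, batch_size=10_000):
    with fsspec.open(url, "rt") as f:
        while True:
            lines = list(itertools.islice(f, batch_size))
            if not lines:
                break
            yield paj.read_json(io.BytesIO("".join(lines).encode("utf-8")))
```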
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2638/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2637/comments
https://api.github.com/repos/huggingface/datasets/issues/2637/events
https://github.com/huggingface/datasets/issues/2637
943,290,736
MDU6SXNzdWU5NDMyOTA3MzY=
2,637
Add the CIDEr metric?
{ "login": "zuujhyt", "id": 75845952, "node_id": "MDQ6VXNlcjc1ODQ1OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/75845952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zuujhyt", "html_url": "https://github.com/zuujhyt", "followers_url": "https://api.github.com/users/zuujhyt/followers", "following_url": "https://api.github.com/users/zuujhyt/following{/other_user}", "gists_url": "https://api.github.com/users/zuujhyt/gists{/gist_id}", "starred_url": "https://api.github.com/users/zuujhyt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zuujhyt/subscriptions", "organizations_url": "https://api.github.com/users/zuujhyt/orgs", "repos_url": "https://api.github.com/users/zuujhyt/repos", "events_url": "https://api.github.com/users/zuujhyt/events{/privacy}", "received_events_url": "https://api.github.com/users/zuujhyt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "same" ]
1,626,178,971,000
1,632,729,425,000
null
NONE
null
null
null
Hi, I find the api in https://huggingface.co/metrics quite useful. I am playing around with video/image captioning task, where CIDEr is a popular metric. Do you plan to add this into the HF ```datasets``` library? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2637/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2637/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2636/comments
https://api.github.com/repos/huggingface/datasets/issues/2636/events
https://github.com/huggingface/datasets/pull/2636
943,044,514
MDExOlB1bGxSZXF1ZXN0Njg4NzEyMTY4
2,636
Streaming for the Pandas loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,167,901,000
1,626,187,044,000
1,626,187,043,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2636", "html_url": "https://github.com/huggingface/datasets/pull/2636", "diff_url": "https://github.com/huggingface/datasets/pull/2636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2636.patch", "merged_at": 1626187043000 }
It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo, for example. Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub.
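A hedged sketch of the idea, assuming a plain `fsspec.open` in place of the extended `open` (which additionally injects the HF Hub auth token for private repos): `pd.read_pickle` accepts a file-like object, so it can consume the wrapped remote stream directly.

```python
# Illustrative only: the URL is hypothetical, and plain fsspec does not add
# Hub authentication headers the way the extended `open` does.
import fsspec
import pandas as pd

url = "https://example.com/data.pkl"  # hypothetical remote pickle
with fsspec.open(url, "rb") as f:
    df = pd.read_pickle(f)  # works on file-like objects, no local path needed
```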
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2636/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2635/comments
https://api.github.com/repos/huggingface/datasets/issues/2635/events
https://github.com/huggingface/datasets/pull/2635
943,030,999
MDExOlB1bGxSZXF1ZXN0Njg4Njk5OTM5
2,635
Streaming for the CSV loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,167,338,000
1,626,189,578,000
1,626,189,577,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2635", "html_url": "https://github.com/huggingface/datasets/pull/2635", "diff_url": "https://github.com/huggingface/datasets/pull/2635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2635.patch", "merged_at": 1626189577000 }
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows. Indeed, when streaming, `open` is extended to support reading from remote files progressively.
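A sketch of the same idea for CSV, again assuming `fsspec.open` approximates the extended `open`: handing `pd.read_csv` a file object and iterating in chunks lets rows be produced before the file has fully downloaded.

```python
# Illustrative chunked read over a remote CSV; names and sizes are arbitrary.
import fsspec
import pandas as pd

url = "https://example.com/data.csv"  # hypothetical remote file
with fsspec.open(url, "rt") as f:
    for chunk in pd.read_csv(f, chunksize=10_000):
        for record in chunk.to_dict(orient="records"):
            ...  # yield/process one row at a time
```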
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2635/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2634/comments
https://api.github.com/repos/huggingface/datasets/issues/2634/events
https://github.com/huggingface/datasets/pull/2634
942,805,621
MDExOlB1bGxSZXF1ZXN0Njg4NDk2Mzc2
2,634
Inject ASR template for lj_speech dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,156,294,000
1,626,167,109,000
1,626,167,109,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2634", "html_url": "https://github.com/huggingface/datasets/pull/2634", "diff_url": "https://github.com/huggingface/datasets/pull/2634.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2634.patch", "merged_at": 1626167109000 }
Related to: #2565, #2633. cc: @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2634/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2634/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2633/comments
https://api.github.com/repos/huggingface/datasets/issues/2633/events
https://github.com/huggingface/datasets/pull/2633
942,396,414
MDExOlB1bGxSZXF1ZXN0Njg4MTMwOTA5
2,633
Update ASR tags
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,119,911,000
1,626,155,126,000
1,626,155,113,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2633", "html_url": "https://github.com/huggingface/datasets/pull/2633", "diff_url": "https://github.com/huggingface/datasets/pull/2633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2633.patch", "merged_at": 1626155113000 }
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2633/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2632/comments
https://api.github.com/repos/huggingface/datasets/issues/2632/events
https://github.com/huggingface/datasets/pull/2632
942,293,727
MDExOlB1bGxSZXF1ZXN0Njg4MDQyMjcw
2,632
add image-classification task template
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome!", "Thanks for adding a new task template - great work @nateraw 🚀 !" ]
1,626,111,663,000
1,626,191,068,000
1,626,190,096,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2632", "html_url": "https://github.com/huggingface/datasets/pull/2632", "diff_url": "https://github.com/huggingface/datasets/pull/2632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2632.patch", "merged_at": 1626190095000 }
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_dataset ds = load_dataset('nateraw/image-folder', data_files='PetImages/') # DatasetDict({ # train: Dataset({ # features: ['file', 'labels'], # num_rows: 23410 # }) # }) ds = ds.prepare_for_task('image-classification') # DatasetDict({ # train: Dataset({ # features: ['image_file_path', 'labels'], # num_rows: 23410 # }) # }) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2632/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2632/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2631/comments
https://api.github.com/repos/huggingface/datasets/issues/2631/events
https://github.com/huggingface/datasets/pull/2631
942,242,271
MDExOlB1bGxSZXF1ZXN0Njg3OTk3MzM2
2,631
Delete extracted files when loading dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sure @stas00, it is still a draft pull request. :)", "Yes, I noticed it after reviewing - my apologies.", "The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). 😟 ", "> The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). worried\r\n\r\nRight! These probably should not be deleted by default, but having an option for those users who are tight on disc space?", "> Right! These probably should not be deleted by default, but having an option for those users who are tight on disc space?\r\n\r\nI propose leaving that for another PR, and leave this one handling only with \"extracted\" files. Is it OK for you? :) ", "Awesome thanks !\r\nI just have one question: what about image/audio datasets for which we store the path to the extracted file on the arrow data ?\r\nIn this case the default should be to keep the extracted files.\r\n\r\nSo for now I would just make `keep_extracted=True` by default until we have a way to separate extracted files that can be deleted and extracted files that are actual resources of the dataset.", "@lhoestq, current implementation only deletes extracted \"files\", not extracted \"directories\", as it uses: `os.remove(path)`. I'm going to add a filter on files, so that this line does not throw an exception when passed a directory.\r\n\r\nFor audio datasets, the audio files are inside the extracted \"directory\", so they are not deleted.", "I'm still more in favor of having `keep_extracted=True` by default:\r\n- When working with a dataset, you call `load_dataset` many times. By default we want to keep objects extracted to not extract them over and over again (it can take a long time). Then once you know what you're doing and you want to optimize disk space, you can do `keep_extracted=False`. Deleting the extracted files by default is a regression that can lead to slow downs for people calling `load_dataset` many times, which is common when experimenting\r\n- This behavior doesn't sound natural as a default behavior. In the rest of the library, things are cached and not removed unless you explicitly say do (`map` caching for example). Moreover the function in the download manager is called `download_and_extract`, not `download_and_extract_and_remove_extracted_files`\r\n\r\nLet me know what you think !", "I think the main issue is that after doing some work users typically move on to other datasets and the amount of disc space used keeps on growing. So your logic is very sound and perhaps what's really needed is a cleansweep function that can go through **all** datasets and clean them up to the desired degree:\r\n\r\n- delete all extracted files\r\n- delete all sources\r\n- delete all caches\r\n- delete all caches that haven't been accessed in 6 months\r\n- delete completely old datasets that haven't been accessed in 6 months\r\n- more?\r\n\r\nSo a user can launch a little application, choose what they want to clean up and voila they have just freed up a huge amount of disc space. Makes me think of Ubuntu Tweak's Janitor app - very useful.\r\n\r\nAt the moment, this process of linting is very daunting and error-prone, especially due to all those dirs/files with hash names.", "@stas00 I've had the same idea. Instead of the full-fledged app, a simpler approach would be to add a new command to the CLI.", "oh, CLI would be perfect. 
I didn't mean to request a GUI-one specifically, was just using it as an example.\r\n\r\nOne could even do a crontab to delete old datasets that haven't been accesses in X months.", "@lhoestq I totally agree with you. I'm addressing that change.\r\n\r\n@stas00, @mariosasko, that could eventually be addressed in another pull request. The objective of this PR is:\r\n- add an option to pass to `load_dataset`, so that extracted files are deleted\r\n- do this deletion file per file, once the file has been already used to generate the cache Arrow file", "I also like the idea of having a CLI tool to help users clean their cache and save disk space, good idea !" ]
1,626,107,973,000
1,626,685,699,000
1,626,685,699,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2631", "html_url": "https://github.com/huggingface/datasets/pull/2631", "diff_url": "https://github.com/huggingface/datasets/pull/2631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2631.patch", "merged_at": 1626685698000 }
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
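A rough sketch of the cache-cleanup idea floated in the comments above; this is not a `datasets` CLI command, and the cache layout and the six-month cutoff are assumptions taken from the discussion.

```python
# Walk the default extraction directory and delete entries not accessed for
# ~6 months. Adjust the root if HF_DATASETS_CACHE points elsewhere.
import shutil
import time
from pathlib import Path

extracted = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads" / "extracted"
cutoff = time.time() - 180 * 24 * 3600  # roughly six months, per the discussion

if extracted.exists():
    for entry in extracted.iterdir():
        if entry.stat().st_atime < cutoff:
            if entry.is_dir():
                shutil.rmtree(entry, ignore_errors=True)
            else:
                entry.unlink()
```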
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2631/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2630/comments
https://api.github.com/repos/huggingface/datasets/issues/2630/events
https://github.com/huggingface/datasets/issues/2630
942,102,956
MDU6SXNzdWU5NDIxMDI5NTY=
2,630
Progress bars are not properly rendered in Jupyter notebook
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "To add my experience when trying to debug this issue:\r\n\r\nSeems like previously the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message from a forked process.", "Hi @mludv, thanks for the hint!!! :) \r\n\r\nWe will definitely take it into account to try to fix this issue... It seems somehow related to `multiprocessing` and `tqdm`..." ]
1,626,098,833,000
1,626,160,832,000
null
MEMBER
null
null
null
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plain progress bars. cc: Reported by @thomwolf
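A sketch of why the widgets degrade, not of `datasets` internals: with `num_proc`, each forked worker creates its own progress bar, and Jupyter widgets cannot be driven from forked processes, so tqdm falls back to plain text bars. Keeping the single bar in the parent process avoids the problem:

```python
# Minimal pattern: one tqdm instance in the main process wraps results
# coming back from the pool, so the notebook widget renders normally.
from multiprocessing import Pool

from tqdm.auto import tqdm  # picks the notebook widget when available

def work(x):
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        results = list(tqdm(pool.imap(work, range(1000)), total=1000))
```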
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2630/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2629/comments
https://api.github.com/repos/huggingface/datasets/issues/2629/events
https://github.com/huggingface/datasets/issues/2629
941,819,205
MDU6SXNzdWU5NDE4MTkyMDU=
2,629
Load datasets from the Hub without requiring a dataset script
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉 " ]
1,626,079,517,000
1,629,901,088,000
1,629,901,088,000
MEMBER
null
null
null
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument. This feature should be compatible with private repositories and dataset streaming. This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.)
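The envisioned usage, sketched from the description above with hypothetical repository and file names; the builder would be inferred from the file extensions:

```python
# Hypothetical call once the feature lands: no dataset script in the repo.
from datasets import load_dataset

ds = load_dataset(
    "username/my-dataset",  # plain data repository on the Hub (hypothetical)
    data_files={"train": "train.csv", "test": "test.csv"},
    use_auth_token=True,  # compatible with private repositories
    streaming=True,       # and with dataset streaming
)
```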
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2629/reactions", "total_count": 11, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 7, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2629/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2628/comments
https://api.github.com/repos/huggingface/datasets/issues/2628/events
https://github.com/huggingface/datasets/pull/2628
941,676,404
MDExOlB1bGxSZXF1ZXN0Njg3NTE0NzQz
2,628
Use ETag of remote data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,066,610,000
1,626,098,914,000
1,626,079,207,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2628", "html_url": "https://github.com/huggingface/datasets/pull/2628", "diff_url": "https://github.com/huggingface/datasets/pull/2628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2628.patch", "merged_at": 1626079207000 }
Use ETag of remote data files to create config ID. Related to #2616.
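A rough sketch of the mechanism, not the actual implementation: folding each remote file's ETag into the config ID means a changed remote file yields a new ID and therefore invalidates the cache even when the URL is unchanged. All names below are illustrative.

```python
import hashlib
import requests

def config_id_for(base_id, urls):
    """Derive a config ID that changes whenever any remote file changes."""
    h = hashlib.sha256(base_id.encode())
    for url in urls:
        etag = requests.head(url, allow_redirects=True).headers.get("ETag", "")
        h.update(etag.encode())
    return f"{base_id}-{h.hexdigest()[:16]}"
```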
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2628/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2627/comments
https://api.github.com/repos/huggingface/datasets/issues/2627/events
https://github.com/huggingface/datasets/pull/2627
941,503,349
MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1
2,627
Minor fix tests with Windows paths
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,026,148,000
1,626,098,927,000
1,626,078,890,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2627", "html_url": "https://github.com/huggingface/datasets/pull/2627", "diff_url": "https://github.com/huggingface/datasets/pull/2627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2627.patch", "merged_at": 1626078890000 }
Minor fix tests with Windows paths.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2627/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2626/comments
https://api.github.com/repos/huggingface/datasets/issues/2626/events
https://github.com/huggingface/datasets/pull/2626
941,497,830
MDExOlB1bGxSZXF1ZXN0Njg3MzY4OTMz
2,626
Use correct logger in metrics.py
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,626,024,150,000
1,626,098,934,000
1,626,069,269,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2626", "html_url": "https://github.com/huggingface/datasets/pull/2626", "diff_url": "https://github.com/huggingface/datasets/pull/2626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2626.patch", "merged_at": 1626069269000 }
Fixes #2624
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2626/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2625/comments
https://api.github.com/repos/huggingface/datasets/issues/2625/events
https://github.com/huggingface/datasets/issues/2625
941,439,922
MDU6SXNzdWU5NDE0Mzk5MjI=
2,625
⚛️😇⚙️🔑
{ "login": "hustlen0mics", "id": 50596661, "node_id": "MDQ6VXNlcjUwNTk2NjYx", "avatar_url": "https://avatars.githubusercontent.com/u/50596661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hustlen0mics", "html_url": "https://github.com/hustlen0mics", "followers_url": "https://api.github.com/users/hustlen0mics/followers", "following_url": "https://api.github.com/users/hustlen0mics/following{/other_user}", "gists_url": "https://api.github.com/users/hustlen0mics/gists{/gist_id}", "starred_url": "https://api.github.com/users/hustlen0mics/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hustlen0mics/subscriptions", "organizations_url": "https://api.github.com/users/hustlen0mics/orgs", "repos_url": "https://api.github.com/users/hustlen0mics/repos", "events_url": "https://api.github.com/users/hustlen0mics/events{/privacy}", "received_events_url": "https://api.github.com/users/hustlen0mics/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,626,005,674,000
1,626,069,359,000
1,626,069,359,000
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2625/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2625/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2624/comments
https://api.github.com/repos/huggingface/datasets/issues/2624/events
https://github.com/huggingface/datasets/issues/2624
941,318,247
MDU6SXNzdWU5NDEzMTgyNDc=
2,624
can't set verbosity for `metric.py`
{ "login": "thomas-happify", "id": 66082334, "node_id": "MDQ6VXNlcjY2MDgyMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomas-happify", "html_url": "https://github.com/thomas-happify", "followers_url": "https://api.github.com/users/thomas-happify/followers", "following_url": "https://api.github.com/users/thomas-happify/following{/other_user}", "gists_url": "https://api.github.com/users/thomas-happify/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomas-happify/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomas-happify/subscriptions", "organizations_url": "https://api.github.com/users/thomas-happify/orgs", "repos_url": "https://api.github.com/users/thomas-happify/repos", "events_url": "https://api.github.com/users/thomas-happify/events{/privacy}", "received_events_url": "https://api.github.com/users/thomas-happify/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks @thomas-happify for reporting and thanks @mariosasko for the fix." ]
1,625,948,625,000
1,626,069,269,000
1,626,069,269,000
NONE
null
null
null
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow. [2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. [2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow ``` As you can see, `datasets` logging comes from different places. `filelock`, `arrow_writer` & `arrow_dataset` come from `datasets.*`, which is expected. However, `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/`. So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message, which is annoying during evaluation. I had to do ``` logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR) ``` to fully mute these messages. ## Expected results It shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: tried both 1.8.0 & 1.9.0 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.8.10 - PyArrow version: 3.0.0
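A sketch of the likely shape of the fix (the actual change is in #2626 above): create the logger through the library's helper with the module name instead of the file path, so it lives under the `datasets.*` hierarchy and respects `set_verbosity_error()`.

```python
# Assumed fix shape: a module-name logger instead of a file-path logger.
from datasets.utils.logging import get_logger

logger = get_logger(__name__)  # e.g. "datasets.metric", not "/conda/envs/..."
```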
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2624/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2623/comments
https://api.github.com/repos/huggingface/datasets/issues/2623/events
https://github.com/huggingface/datasets/pull/2623
941,265,342
MDExOlB1bGxSZXF1ZXN0Njg3MTk0MjM3
2,623
[Metrics] added wiki_split metrics
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks all good to me thanks :)\r\nJust did some minor corrections in the docstring" ]
1,625,928,710,000
1,626,272,893,000
1,626,129,271,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2623", "html_url": "https://github.com/huggingface/datasets/pull/2623", "diff_url": "https://github.com/huggingface/datasets/pull/2623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2623.patch", "merged_at": 1626129271000 }
Fixes: #2606 This pull request adds the combined metrics for the WikiSplit (English sentence splitting) task. Reviewer: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2623/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2622/comments
https://api.github.com/repos/huggingface/datasets/issues/2622/events
https://github.com/huggingface/datasets/issues/2622
941,127,785
MDU6SXNzdWU5NDExMjc3ODU=
2,622
Integration with AugLy
{ "login": "Darktex", "id": 890615, "node_id": "MDQ6VXNlcjg5MDYxNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/890615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Darktex", "html_url": "https://github.com/Darktex", "followers_url": "https://api.github.com/users/Darktex/followers", "following_url": "https://api.github.com/users/Darktex/following{/other_user}", "gists_url": "https://api.github.com/users/Darktex/gists{/gist_id}", "starred_url": "https://api.github.com/users/Darktex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darktex/subscriptions", "organizations_url": "https://api.github.com/users/Darktex/orgs", "repos_url": "https://api.github.com/users/Darktex/repos", "events_url": "https://api.github.com/users/Darktex/events{/privacy}", "received_events_url": "https://api.github.com/users/Darktex/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi,\r\n\r\nyou can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:\r\n```python\r\ndset = load_dataset(\"imdb\", split=\"train\") # Let's say we are working with the IMDB dataset\r\ndset.set_transform(lambda ex: {\"text\": augly_text_augmentation(ex[\"text\"])}, columns=\"text\", output_all_columns=True)\r\ndataloader = torch.utils.data.DataLoader(dset, batch_size=32)\r\nfor epoch in range(5):\r\n for batch in dataloader:\r\n tokenizer_output = tokenizer(batch.pop(\"text\"), padding=True, truncation=True, return_tensors=\"pt\")\r\n batch.update(tokenizer_output)\r\n output = model(**batch)\r\n ...\r\n```" ]
1,625,875,389,000
1,626,023,291,000
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy), that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings, punctuation, emojis, etc. Plus, with Transformers supporting more CV use cases, having augmentation support becomes crucial. **Describe the solution you'd like** The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but augmentations run once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set the format to pt tensors and be ready for the Dataloader. **Describe alternatives you've considered** One possible way of implementing these is to make a custom Dataset class where `__getitem__(i)` runs the augmentation and the tokenizer every time, though this would slow training down considerably given we wouldn't even run the tokenizer in batches.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2622/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2621/comments
https://api.github.com/repos/huggingface/datasets/issues/2621/events
https://github.com/huggingface/datasets/pull/2621
940,916,446
MDExOlB1bGxSZXF1ZXN0Njg2OTE1Mzcw
2,621
Use prefix to allow exceeding Windows MAX_PATH
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Does this mean the `FileNotFoundError` that avoids infinite loop can be removed?", "Yes, I think so...", "Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?", " > Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?\r\n\r\nWhat about converting relative paths to absolute?", "Nice ! Have you had a chance to test it on a windows machine with the max path limit enabled ? Afaik the CI doesn't have the path limit", "Sure @lhoestq: I've tested on my machine... And this fixes most of the tests... 😅 " ]
1,625,848,793,000
1,626,449,292,000
1,626,449,291,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2621", "html_url": "https://github.com/huggingface/datasets/pull/2621", "diff_url": "https://github.com/huggingface/datasets/pull/2621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2621.patch", "merged_at": 1626449291000 }
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
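For readers unfamiliar with the Win32 mechanism this PR relies on, here is an illustrative sketch (not necessarily the PR's exact code) of prepending the `\\?\` extended-length prefix; per the linked Microsoft docs, the prefix only applies to absolute paths, which also matches the relative-to-absolute conversion discussed in the comments.

```python
import os

def extend_windows_path(path: str) -> str:
    # The "\\?\" prefix tells Win32 APIs to skip path normalization and
    # the 260-character MAX_PATH limit; it requires an absolute path.
    path = os.path.abspath(path)
    if os.name == "nt" and not path.startswith("\\\\?\\"):
        path = "\\\\?\\" + path
    return path

# On Windows this prints something like \\?\C:\very\deep\tree\file.arrow
print(extend_windows_path(r"C:\very\deep\tree\file.arrow"))
```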
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2621/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2621/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2620/comments
https://api.github.com/repos/huggingface/datasets/issues/2620/events
https://github.com/huggingface/datasets/pull/2620
940,893,389
MDExOlB1bGxSZXF1ZXN0Njg2ODk3MDky
2,620
Add speech processing tasks
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?", "> Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?\r\n\r\nYes there's a few - I'll fix them tomorrow :)" ]
1,625,846,849,000
1,626,114,779,000
1,626,111,122,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2620", "html_url": "https://github.com/huggingface/datasets/pull/2620", "diff_url": "https://github.com/huggingface/datasets/pull/2620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2620.patch", "merged_at": 1626111122000 }
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2620/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2619/comments
https://api.github.com/repos/huggingface/datasets/issues/2619/events
https://github.com/huggingface/datasets/pull/2619
940,858,236
MDExOlB1bGxSZXF1ZXN0Njg2ODY3NDA4
2,619
Add ASR task for SUPERB
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "Wait until #2620 is merged before pushing the README tags in this PR", "> Thanks!\r\n> \r\n> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`?\r\n\r\ngreat catch! i've now added the asr task template (along with a mapping from superb task -> template) and updated the `dataset_infos.json` :) ", "> Good!\r\n> \r\n> I have a suggested refactoring... Tell me what you think! :)\r\n\r\nyour approach is much more elegant - i've included your suggestions 🙏 " ]
1,625,843,985,000
1,626,339,358,000
1,626,180,018,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2619", "html_url": "https://github.com/huggingface/datasets/pull/2619", "diff_url": "https://github.com/huggingface/datasets/pull/2619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2619.patch", "merged_at": 1626180018000 }
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset asr = load_dataset("superb", "asr") # DatasetDict({ # train: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 28539 # }) # validation: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2703 # }) # test: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2620 # }) # }) ``` I've used the GLUE benchmark as a guide for filling out the README. To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel. Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2619/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2619/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2618/comments
https://api.github.com/repos/huggingface/datasets/issues/2618/events
https://github.com/huggingface/datasets/issues/2618
940,852,640
MDU6SXNzdWU5NDA4NTI2NDA=
2,618
`filelock.py` Error
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @liyucheng09, thanks for reporting.\r\n\r\nApparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide file locks. You should ask your system administrator, or try these commands in the terminal:\r\n```shell\r\nsudo systemctl enable rpc-statd\r\nsudo systemctl start rpc-statd\r\n```" ]
1,625,843,569,000
1,626,070,830,000
null
NONE
null
null
null
## Describe the bug It seems that `filelock.py` raises an error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available ``` According to the error log, it is an OSError, but there is an `except` clause in the `_acquire` function. ``` def _acquire(self): open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC try: fd = os.open(self._lock_file, open_mode) except (IOError, OSError): pass else: self._lock_file_fd = fd return None ``` I don't know why it got stuck rather than hitting the `pass` branch directly. I am not quite familiar with file locking, so any help is highly appreciated. ## Steps to reproduce the bug ```python ds = load_dataset('xsum') ``` ## Expected results The dataset should load without hanging on the file lock. ## Actual results ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset use_auth_token=use_auth_token, File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module with FileLock(lock_path): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) KeyboardInterrupt ``` ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 4.0.1
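A quick way to confirm the hypothesis from the comment above (an NFS mount without lock support) is to attempt the same `fcntl.flock` call on a scratch file in the cache directory. This is a hedged diagnostic sketch; the cache path is the default location assumed from the traceback.

```python
import fcntl
import os
import tempfile

cache_dir = os.path.expanduser("~/.cache/huggingface")  # assumed default cache location
os.makedirs(cache_dir, exist_ok=True)

with tempfile.NamedTemporaryFile(dir=cache_dir) as f:
    try:
        # The same non-blocking exclusive lock that filelock.py attempts.
        fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        print("flock works on this filesystem")
    except OSError as e:
        print(f"flock unavailable here: {e}")  # [Errno 37] matches this issue
```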
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2618/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2617/comments
https://api.github.com/repos/huggingface/datasets/issues/2617/events
https://github.com/huggingface/datasets/pull/2617
940,846,847
MDExOlB1bGxSZXF1ZXN0Njg2ODU3NzQz
2,617
Fix missing EOL issue in to_json for old versions of pandas
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,843,145,000
1,626,098,940,000
1,625,844,513,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2617", "html_url": "https://github.com/huggingface/datasets/pull/2617", "diff_url": "https://github.com/huggingface/datasets/pull/2617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2617.patch", "merged_at": 1625844513000 }
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore, users could end up having two samples on the same line. Closes https://github.com/huggingface/datasets/issues/2615
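A minimal sketch of the guard this fix describes (not necessarily the merged diff): ensure each batch serialized by pandas ends with a newline before it is written, so consecutive batches cannot share a line under old pandas versions.

```python
import io

import pandas as pd

def write_jsonlines_batch(df: pd.DataFrame, fp: io.TextIOBase) -> None:
    # pandas < 1.2 omits the trailing newline after the last record
    # (fixed upstream in pandas-dev/pandas#36898).
    json_str = df.to_json(orient="records", lines=True)
    if not json_str.endswith("\n"):
        json_str += "\n"
    fp.write(json_str)
```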
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2617/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2616/comments
https://api.github.com/repos/huggingface/datasets/issues/2616/events
https://github.com/huggingface/datasets/pull/2616
940,799,038
MDExOlB1bGxSZXF1ZXN0Njg2ODE3NjYz
2,616
Support remote data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "@lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?", "> @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?\r\n\r\nSure ! We can get the ETag with\r\n```python\r\nheaders = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) # auth for private repos\r\netag = http_head(url, headers=headers).headers.get(\"ETag\")\r\n```\r\n\r\nSince the computation of the `config_id` is done in the `DatasetBuilder.__init__`, then this means that we need to add a new parameter `use_auth_token` in `DatasetBuilder.__init__`\r\n\r\nDoes that sound good ? We can add this in a following PR" ]
1,625,839,658,000
1,625,847,221,000
1,625,847,221,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2616", "html_url": "https://github.com/huggingface/datasets/pull/2616", "diff_url": "https://github.com/huggingface/datasets/pull/2616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2616.patch", "merged_at": 1625847221000 }
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2616/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2615/comments
https://api.github.com/repos/huggingface/datasets/issues/2615/events
https://github.com/huggingface/datasets/issues/2615
940,794,339
MDU6SXNzdWU5NDA3OTQzMzk=
2,615
Jsonlines export error
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting @TevenLeScao! I'm having a look...", "(not sure what just happened on the assignations sorry)", "For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.", "@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. Could you please check the pandas version causing the issue?", "@TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https://github.com/pandas-dev/pandas/pull/36898", "Thanks ! I'm creating a PR", "Well I though it was me who has taken on this issue... 😅 ", "Sorry, I was also talking to teven offline so I already had the PR ready before noticing x)", "I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he/she is still working on it before overtaking it... 😄 ", "The fix is available on `master` @TevenLeScao , thanks for reporting" ]
1,625,839,325,000
1,625,844,547,000
1,625,844,513,000
MEMBER
null
null
null
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th lines are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, whose size is 10000 by default. ## Steps to reproduce the bug This is what I'm running: in Python: ``` from datasets import load_dataset ptb = load_dataset("ptb_text_only") ptb["train"].to_json("ptb.jsonl") ``` then outside of Python: ``` head -10000 ptb.jsonl ``` ## Expected results Properly separated lines ## Actual results The last line is a concatenation of two lines ## Environment info - `datasets` version: 1.9.1.dev0 - Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyArrow version: 4.0.1
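To locate the malformed record programmatically, a small validation loop like the sketch below works: a well-formed JSON Lines file must have every line parse on its own, so a concatenated pair shows up as a `JSONDecodeError`.

```python
import json

# Validate the exported file from the reproduction above.
with open("ptb.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f, start=1):
        try:
            json.loads(line)
        except json.JSONDecodeError:
            print(f"malformed record on line {i}: {line[:80]}...")
```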
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2615/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2614/comments
https://api.github.com/repos/huggingface/datasets/issues/2614/events
https://github.com/huggingface/datasets/pull/2614
940,762,427
MDExOlB1bGxSZXF1ZXN0Njg2Nzg2NTg3
2,614
Convert numpy scalar to python float in Pearsonr output
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,836,975,000
1,626,099,182,000
1,625,839,478,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2614", "html_url": "https://github.com/huggingface/datasets/pull/2614", "diff_url": "https://github.com/huggingface/datasets/pull/2614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2614.patch", "merged_at": 1625839478000 }
Following of https://github.com/huggingface/datasets/pull/2612
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2614/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2613/comments
https://api.github.com/repos/huggingface/datasets/issues/2613/events
https://github.com/huggingface/datasets/pull/2613
940,759,852
MDExOlB1bGxSZXF1ZXN0Njg2Nzg0MzY0
2,613
Use ndarray.item instead of ndarray.tolist
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,836,775,000
1,626,099,177,000
1,625,838,605,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2613", "html_url": "https://github.com/huggingface/datasets/pull/2613", "diff_url": "https://github.com/huggingface/datasets/pull/2613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2613.patch", "merged_at": 1625838605000 }
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
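A tiny demonstration of the distinction argued here: both calls return a Python `float` for a scalar, but `item` states the intent directly.

```python
import numpy as np

x = np.float64(0.5)
print(type(x.tolist()))  # <class 'float'> (works, but the name suggests a list)
print(type(x.item()))    # <class 'float'> (explicit scalar-to-Python conversion)
```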
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2613/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2612/comments
https://api.github.com/repos/huggingface/datasets/issues/2612/events
https://github.com/huggingface/datasets/pull/2612
940,604,512
MDExOlB1bGxSZXF1ZXN0Njg2NjUwMjk3
2,612
Return Python float instead of numpy.float64 in sklearn metrics
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490", "It could be surprising at first to use `tolist()` on numpy scalars but it works ^^", "did the same for Pearsonr here: https://github.com/huggingface/datasets/pull/2614" ]
1,625,824,089,000
1,626,099,173,000
1,625,835,834,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2612", "html_url": "https://github.com/huggingface/datasets/pull/2612", "diff_url": "https://github.com/huggingface/datasets/pull/2612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2612.patch", "merged_at": 1625835834000 }
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like: ```python import yaml from datasets import load_metric metric = load_metric("accuracy") score = metric.compute(predictions=[0,1], references=[0,1]) print(yaml.dump(score["accuracy"])) # output below # !!python/object/apply:numpy.core.multiarray.scalar # - !!python/object/apply:numpy.dtype # args: # - f8 # - false # - true # state: !!python/tuple # - 3 # - < # - null # - null # - null # - -1 # - -1 # - 0 # - !!binary | # AAAAAAAA8D8= ```
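Assuming the same reproduction as above, the change (or, on older releases, an explicit `float(...)` cast as a workaround) yields a plain scalar that YAML serializes cleanly:

```python
import yaml
from datasets import load_metric

metric = load_metric("accuracy")
score = metric.compute(predictions=[0, 1], references=[0, 1])

# With this PR the value is already a Python float; float(...) is a
# backwards-compatible workaround for older versions.
print(yaml.dump(float(score["accuracy"])))  # prints "1.0"
```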
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2612/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2611/comments
https://api.github.com/repos/huggingface/datasets/issues/2611/events
https://github.com/huggingface/datasets/pull/2611
940,307,053
MDExOlB1bGxSZXF1ZXN0Njg2Mzk5MjU3
2,611
More consistent naming
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,789,357,000
1,626,196,399,000
1,626,192,510,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2611", "html_url": "https://github.com/huggingface/datasets/pull/2611", "diff_url": "https://github.com/huggingface/datasets/pull/2611.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2611.patch", "merged_at": 1626192510000 }
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2611/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2611/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2610/comments
https://api.github.com/repos/huggingface/datasets/issues/2610/events
https://github.com/huggingface/datasets/pull/2610
939,899,829
MDExOlB1bGxSZXF1ZXN0Njg2MDUwMzI5
2,610
Add missing WikiANN language tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,753,281,000
1,626,099,136,000
1,625,759,044,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2610", "html_url": "https://github.com/huggingface/datasets/pull/2610", "diff_url": "https://github.com/huggingface/datasets/pull/2610.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2610.patch", "merged_at": 1625759044000 }
Add missing language tags for WikiANN datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2610/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2609/comments
https://api.github.com/repos/huggingface/datasets/issues/2609/events
https://github.com/huggingface/datasets/pull/2609
939,616,682
MDExOlB1bGxSZXF1ZXN0Njg1ODA3MTMz
2,609
Fix potential DuplicatedKeysError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "Finally, I'm splitting this PR." ]
1,625,733,484,000
1,626,099,196,000
1,625,848,928,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2609", "html_url": "https://github.com/huggingface/datasets/pull/2609", "diff_url": "https://github.com/huggingface/datasets/pull/2609.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2609.patch", "merged_at": 1625848928000 }
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote, as a good practice, generating keys programmatically so that they are unique, instead of reading them from the data (which might not be unique).
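A minimal sketch of the pattern this PR promotes, assuming a hypothetical line-delimited data file; the function name and field names are illustrative, not the actual builder code:

```python
import json

def generate_examples(filepath):
    """Yield (key, example) pairs whose keys are generated, not read from data."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            row = json.loads(line)
            # `idx` is unique by construction; `row["id"]` might be duplicated.
            yield idx, {"id": row.get("id"), "text": row["text"]}
```

Keying on the enumeration counter avoids `DuplicatedKeysError` even when the raw data contains repeated identifiers.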
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2609/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2608/comments
https://api.github.com/repos/huggingface/datasets/issues/2608/events
https://github.com/huggingface/datasets/pull/2608
938,897,626
MDExOlB1bGxSZXF1ZXN0Njg1MjAwMDYw
2,608
Support streaming JSON files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,664,622,000
1,626,099,151,000
1,625,760,521,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2608", "html_url": "https://github.com/huggingface/datasets/pull/2608", "diff_url": "https://github.com/huggingface/datasets/pull/2608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2608.patch", "merged_at": 1625760520000 }
Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming. Close #2607.
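A rough sketch of why going through the builtin `open` matters here; `xopen` below is a simplified stand-in for the URL-aware function `datasets` patches in during streaming, so the details are assumptions rather than the library's actual implementation:

```python
import fsspec

def xopen(file, mode="rb", *args, **kwargs):
    # Simplified stand-in: resolves local paths and URLs alike.
    return fsspec.open(file, mode, *args, **kwargs).open()

def read_head(path_or_url, opener=open):
    # Because the builder goes through `opener` (the builtin `open` by
    # default), streaming mode can substitute `xopen` without code changes.
    with opener(path_or_url, "rb") as f:
        return f.read(16)
```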
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2608/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2607/comments
https://api.github.com/repos/huggingface/datasets/issues/2607/events
https://github.com/huggingface/datasets/issues/2607
938,796,902
MDU6SXNzdWU5Mzg3OTY5MDI=
2,607
Streaming local gzip compressed JSON line files is not working
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Updating to pyarrow-4.0.1 didn't fix the issue", "Here is an exemple dataset with 2 of these compressed JSON files: https://huggingface.co/datasets/thomwolf/github-python", "Hi @thomwolf, thanks for reporting.\r\n\r\nIt seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj.read_json`) to read the data without using the Python standard `open(file,...` (which is the one patched with `xopen` to work in streaming mode).\r\n\r\nThis has to be fixed.", "Sorry for reopening this, but I'm having the same issue as @thomwolf when streaming a gzipped JSON Lines file from the hub. Or is that just not possible by definition?\r\nI installed `datasets`in editable mode from source (so probably includes the fix from #2608 ?): \r\n```\r\n>>> datasets.__version__\r\n'1.9.1.dev0'\r\n```\r\n\r\n```\r\n>>> msmarco = datasets.load_dataset(\"webis/msmarco\", \"corpus\", streaming=True)\r\nUsing custom data configuration corpus-174d3b7155eb68db\r\n>>> msmarco_iter = iter(msmarco['train'])\r\n>>> print(next(msmarco_iter))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 338, in __iter__\r\n for key, example in self._iter():\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 335, in _iter\r\n yield from ex_iterable\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"/home/christopher/.cache/huggingface/modules/datasets_modules/datasets/msmarco/eb63dff8d83107168e973c7a655a6082d37e08d71b4ac39a0afada479c138745/msmarco.py\", line 96, in _generate_examples\r\n with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n File \"/usr/lib/python3.6/gzip.py\", line 53, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/usr/lib/python3.6/gzip.py\", line 163, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/webis/msmarco/resolve/main/msmarco_doc_00.gz'\r\n```\r\n\r\nLoading the dataset without streaming set to True, works fine.", "Hi ! To make the streaming work, we extend `open` in the dataset builder to work with urls.\r\n\r\nTherefore you just need to use `open` before using `gzip.open`:\r\n```diff\r\n- with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n+ with gzip.open(open(file, \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n```\r\n\r\nYou can see that it is the case for oscar.py and c4.py for example:\r\n\r\nhttps://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/datasets/oscar/oscar.py#L358-L358\r\n\r\nhttps://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/datasets/c4/c4.py#L88-L88\r\n\r\n", "@lhoestq Sorry I missed that. Thank you Quentin!" ]
1,625,657,793,000
1,626,774,619,000
1,625,760,521,000
MEMBER
null
null
null
## Describe the bug Using streaming to iterate over local gzip-compressed JSON files raises a FileNotFoundError ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset)) ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-6-27a664e29784> in <module> ----> 1 next(iter(streamed_dataset)) ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 336 337 def __iter__(self): --> 338 for key, example in self._iter(): 339 if self.features: 340 # we encode the example for ClassLabel feature types for example ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in _iter(self) 333 else: 334 ex_iterable = self._ex_iterable --> 335 yield from ex_iterable 336 337 def __iter__(self): ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in wrapper(**kwargs) 282 def wrapper(**kwargs): 283 python_formatter = PythonFormatter() --> 284 for key, table in generate_tables_fn(**kwargs): 285 batch = python_formatter.format_batch(table) 286 for i, example in enumerate(_batch_to_examples(batch)): ~/Documents/GitHub/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files, original_files) 85 file, 86 read_options=self.config.pa_read_options, ---> 87 parse_options=self.config.pa_parse_options, 88 ) 89 except pa.ArrowInvalid as err: ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json._get_reader() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_input_stream() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_native_file() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile.__cinit__() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile._open_readable() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'gzip://file-000000000000.json::/Users/thomwolf/github-dataset/file-000000000000.json.gz'. Detail: [errno 2] No such file or directory ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.1.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyArrow version: 1.0.0
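For reference, the workaround discussed in the thread above, shown as a self-contained sketch (the file argument is hypothetical): wrapping the patchable builtin `open` gives `gzip` a file object to decompress, so no on-disk path is required:

```python
import gzip

def stream_jsonl_gz(file):
    # `open` is the builtin that `datasets` patches in streaming mode;
    # passing its file object to gzip.open avoids the missing-file error.
    with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f:
        yield from f
```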
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2607/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2606/comments
https://api.github.com/repos/huggingface/datasets/issues/2606/events
https://github.com/huggingface/datasets/issues/2606
938,763,684
MDU6SXNzdWU5Mzg3NjM2ODQ=
2,606
[Metrics] addition of wiki_split metrics
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2459308248, "node_id": "MDU6TGFiZWwyNDU5MzA4MjQ4", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20request", "name": "metric request", "color": "d4c5f9", "default": false, "description": "Requesting to add a new metric" } ]
closed
false
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false } ]
null
[ "#take" ]
1,625,655,364,000
1,626,129,271,000
1,626,129,271,000
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** While training a model on the sentence-splitting task in English, we need to evaluate the trained model on the `Exact Match`, `SARI` and `BLEU` scores, like this: ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01-4b48db7a6694.png) During training we need a metric that can return all of these outputs. Currently, we don't have an exact-match metric for text-normalized data. **Describe the solution you'd like** A custom metric for wiki_split that can calculate these three values and provide them in the form of a single dictionary. For exact match, we can refer to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py) **Describe alternatives you've considered** Two of the metrics are already present; one more can be added for exact match, and then we can run all three metrics in the training script. #self-assign
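A hedged sketch of what such a combined metric could look like, built from the existing `sari` and `sacrebleu` metrics plus a naive exact-match count; the function name and the (unnormalized) exact-match definition are assumptions, not the metric that was eventually merged:

```python
from datasets import load_metric

def wiki_split_scores(sources, predictions, references):
    # `references` is a list of lists of reference strings, as expected
    # by both the sari and sacrebleu metrics.
    sari = load_metric("sari").compute(
        sources=sources, predictions=predictions, references=references
    )
    bleu = load_metric("sacrebleu").compute(
        predictions=predictions, references=references
    )
    # Naive exact match against the first reference, as a percentage.
    exact = sum(p.strip() == refs[0].strip() for p, refs in zip(predictions, references))
    return {
        "sari": sari["sari"],
        "sacrebleu": bleu["score"],
        "exact_match": 100.0 * exact / len(predictions),
    }
```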
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2606/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2605/comments
https://api.github.com/repos/huggingface/datasets/issues/2605/events
https://github.com/huggingface/datasets/pull/2605
938,648,164
MDExOlB1bGxSZXF1ZXN0Njg0OTkyODIz
2,605
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,647,643,000
1,626,099,027,000
1,625,648,353,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2605", "html_url": "https://github.com/huggingface/datasets/pull/2605", "diff_url": "https://github.com/huggingface/datasets/pull/2605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2605.patch", "merged_at": 1625648353000 }
During the FLAX sprint some users have this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing. Therefore, I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError`. In particular, both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`.
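A minimal sketch of the retry behavior described above, catching the base `aiohttp.ClientError` so that subclasses such as `ClientOSError` and `ServerDisconnectedError` also retry; the function and backoff policy are illustrative, not the library's actual code:

```python
import asyncio
import aiohttp

async def fetch_with_retry(url, max_retries=3, backoff=1.0):
    for attempt in range(max_retries):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as resp:
                    return await resp.read()
        except aiohttp.ClientError:  # base class: covers ClientOSError etc.
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(backoff * (attempt + 1))
```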
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2605/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2604/comments
https://api.github.com/repos/huggingface/datasets/issues/2604/events
https://github.com/huggingface/datasets/issues/2604
938,602,237
MDU6SXNzdWU5Mzg2MDIyMzc=
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "Hi !\r\nIf we want something more general, we could either\r\n1. delete the extracted files after the arrow data generation automatically, or \r\n2. delete each extracted file during the arrow generation right after it has been closed.\r\n\r\nSolution 2 is better to save disk space during the arrow generation. Is it what you had in mind ?\r\n\r\nThe API could look like\r\n```python\r\nload_dataset(..., delete_extracted_files_after_usage=True)\r\n```\r\n\r\nIn terms of implementation, here are some directions we could take for each solution:\r\n1. get the list of the extracted files from the DownloadManager and then delete them after the dataset is processed. This can be implemented in `download_and_prepare` I guess\r\n2. maybe wrap and mock `open` in the builder to make it delete the file when the file is closed.", "Also, if I delete the extracted files they need to be re-extracted again instead of loading from the Arrow cache files", "I think we already opened an issue about this topic (suggested by @stas00): duplicated of #2481?\r\n\r\nThis is in our TODO list... 😅 ", "I think the deletion of each extracted file could be implemented in our CacheManager and ExtractManager (once merged to master: #2295, #2277). 😉 ", "Oh yes sorry, I didn't check if this was a duplicate", "Nevermind @thomwolf, I just mentioned the other issue so that both appear linked in GitHub and we do not forget to close both once we make the corresponding Pull Request... That was the main reason! 😄 ", "Ok yes. I think this is an important feature to be able to use large datasets which are pretty much always compressed files.\r\n\r\nIn particular now this requires to keep the extracted file on the drive if you want to avoid reprocessing the dataset so in my case, this require using always ~400GB of drive instead of just 200GB (which is already significant). \r\n\r\nTwo nice features would be to:\r\n- allow to delete the extracted files without loosing the ability to load the dataset from the cached arrow-file\r\n- streamlined decompression when only the currently read file is extracted - this might require to read the list of files from the extracted archives before processing them?", "Here is a sample dataset with 2 such large compressed JSON files for debugging: https://huggingface.co/datasets/thomwolf/github-python", "Note that I'm confirming that with the current master branch of dataset, deleting extracted files (without deleting the arrow cache file) lead to **re-extracting** these files when reloading the dataset instead of directly loading the arrow cache file.", "Hi ! That's weird, it doesn't do that on my side (tested on master on my laptop by deleting the `extracted` folder in the download cache directory). You tested with one of the files at https://huggingface.co/datasets/thomwolf/github-python that you have locally ?", "Yes it’s when I load local compressed JSON line files with load_dataset(‘json’, data_files=…) ", "@thomwolf I'm sorry but I can't reproduce this problem. 
I'm also using: \r\n```python\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, cache_dir=cache_dir)\r\n```\r\nafter having removed the extracted files:\r\n```python\r\nassert sorted((cache_dir / \"downloads\" / \"extracted\").iterdir()) == []\r\n```\r\n\r\nI get the logging message:\r\n```shell\r\nWARNING datasets.builder:builder.py:531 Reusing dataset json ...\r\n```", "Do you confirm the extracted folder stays empty after reloading?", "> \r\n> \r\n> Do you confirm the extracted folder stays empty after reloading?\r\n\r\nYes, I have the above mentioned assertion on the emptiness of the extracted folder:\r\n```python\r\nassert sorted((cache_dir / \"downloads\" / \"extracted\").iterdir()) == []\r\n```\r\n" ]
1,625,644,576,000
1,626,685,698,000
1,626,685,698,000
MEMBER
null
null
null
I'm loading a dataset consisting of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of Arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter. I can maybe tackle this one in the JSON script unless you want a more general solution.
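Until such an option exists, a manual cleanup along these lines frees the space; the cache layout (`downloads/extracted` under the cache directory) follows the discussion above, and the helper is a sketch, not an official API:

```python
import shutil
from pathlib import Path

def delete_extracted_files(cache_dir="~/.cache/huggingface/datasets"):
    # Remove decompressed copies while keeping the Arrow cache, so the
    # dataset can still be reloaded from the cached arrow files.
    extracted = Path(cache_dir).expanduser() / "downloads" / "extracted"
    if extracted.exists():
        shutil.rmtree(extracted)
```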
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2604/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2603/comments
https://api.github.com/repos/huggingface/datasets/issues/2603/events
https://github.com/huggingface/datasets/pull/2603
938,588,149
MDExOlB1bGxSZXF1ZXN0Njg0OTQ0ODcz
2,603
Fix DuplicatedKeysError in omp
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,643,512,000
1,626,099,041,000
1,625,662,595,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2603", "html_url": "https://github.com/huggingface/datasets/pull/2603", "diff_url": "https://github.com/huggingface/datasets/pull/2603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2603.patch", "merged_at": 1625662595000 }
Close #2598.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2603/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2602/comments
https://api.github.com/repos/huggingface/datasets/issues/2602/events
https://github.com/huggingface/datasets/pull/2602
938,555,712
MDExOlB1bGxSZXF1ZXN0Njg0OTE5MjMy
2,602
Remove import of transformers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,641,098,000
1,626,099,022,000
1,625,646,531,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2602", "html_url": "https://github.com/huggingface/datasets/pull/2602", "diff_url": "https://github.com/huggingface/datasets/pull/2602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2602.patch", "merged_at": 1625646531000 }
When pickling a tokenizer within multiprocessing, check that it is an instance of transformers `PreTrainedTokenizerBase` without importing transformers. Related to huggingface/transformers#12549 and #502.
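A sketch of the general technique (checking an instance's class without importing the defining library): if `transformers` was never imported, nothing can be an instance of its classes, so `sys.modules` can be consulted instead of triggering an import. This illustrates the idea, not the exact code of the PR:

```python
import sys

def is_pretrained_tokenizer(obj):
    transformers = sys.modules.get("transformers")
    if transformers is None:
        # transformers was never imported, so obj cannot be one of its types.
        return False
    return isinstance(obj, transformers.PreTrainedTokenizerBase)
```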
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2602/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2601/comments
https://api.github.com/repos/huggingface/datasets/issues/2601/events
https://github.com/huggingface/datasets/pull/2601
938,096,396
MDExOlB1bGxSZXF1ZXN0Njg0NTQyNjY5
2,601
Fix `filter` with multiprocessing in case all samples are discarded
{ "login": "mxschmdt", "id": 4904985, "node_id": "MDQ6VXNlcjQ5MDQ5ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxschmdt", "html_url": "https://github.com/mxschmdt", "followers_url": "https://api.github.com/users/mxschmdt/followers", "following_url": "https://api.github.com/users/mxschmdt/following{/other_user}", "gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions", "organizations_url": "https://api.github.com/users/mxschmdt/orgs", "repos_url": "https://api.github.com/users/mxschmdt/repos", "events_url": "https://api.github.com/users/mxschmdt/events{/privacy}", "received_events_url": "https://api.github.com/users/mxschmdt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,591,188,000
1,626,099,035,000
1,625,662,231,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2601", "html_url": "https://github.com/huggingface/datasets/pull/2601", "diff_url": "https://github.com/huggingface/datasets/pull/2601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2601.patch", "merged_at": 1625662231000 }
Fixes #2600. Also, I moved the check for `num_proc` being larger than the dataset size (added in #2566) up, so that multiprocessing is not used with one process.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2601/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2600/comments
https://api.github.com/repos/huggingface/datasets/issues/2600/events
https://github.com/huggingface/datasets/issues/2600
938,086,745
MDU6SXNzdWU5MzgwODY3NDU=
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
{ "login": "mxschmdt", "id": 4904985, "node_id": "MDQ6VXNlcjQ5MDQ5ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxschmdt", "html_url": "https://github.com/mxschmdt", "followers_url": "https://api.github.com/users/mxschmdt/followers", "following_url": "https://api.github.com/users/mxschmdt/following{/other_user}", "gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions", "organizations_url": "https://api.github.com/users/mxschmdt/orgs", "repos_url": "https://api.github.com/users/mxschmdt/repos", "events_url": "https://api.github.com/users/mxschmdt/events{/privacy}", "received_events_url": "https://api.github.com/users/mxschmdt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,625,590,405,000
1,625,662,231,000
1,625,662,231,000
CONTRIBUTOR
null
null
null
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) data.filter(lambda x: False, num_proc=2) ``` ## Expected results An empty table should be returned without crashing. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter return self.map( File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map result = concatenate_datasets(transformed_shards) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets table = concat_tables(tables_to_concat, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables blocks = to_blocks(tables[0]) IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.10 - PyArrow version: 3.0.0
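The reproduction from the report, runnable as-is; on a version that includes the fix from #2601 it returns an empty dataset instead of raising:

```python
from datasets import Dataset

data = Dataset.from_dict({"id": [0, 1]})
filtered = data.filter(lambda x: False, num_proc=2)
# Before the fix this raised IndexError because every shard was empty;
# afterwards an empty dataset is returned.
assert len(filtered) == 0
```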
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2600/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2599/comments
https://api.github.com/repos/huggingface/datasets/issues/2599/events
https://github.com/huggingface/datasets/pull/2599
937,980,229
MDExOlB1bGxSZXF1ZXN0Njg0NDQ2MTYx
2,599
Update processing.rst with other export formats
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,583,038,000
1,626,099,016,000
1,625,645,148,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2599", "html_url": "https://github.com/huggingface/datasets/pull/2599", "diff_url": "https://github.com/huggingface/datasets/pull/2599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2599.patch", "merged_at": 1625645148000 }
Add the other supported export formats besides CSV to the docs.
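A quick sketch of the kinds of exports such a docs page covers; the dataset name and file names are illustrative, assuming the `to_csv`/`to_json`/`to_pandas`/`to_dict` exporters available in `datasets` at the time:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="test")
ds.to_csv("imdb_test.csv")       # CSV file
ds.to_json("imdb_test.jsonl")    # JSON Lines file
df = ds.to_pandas()              # in-memory pandas DataFrame
records = ds.to_dict()           # plain Python dict of columns
```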
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2599/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2598/comments
https://api.github.com/repos/huggingface/datasets/issues/2598/events
https://github.com/huggingface/datasets/issues/2598
937,930,632
MDU6SXNzdWU5Mzc5MzA2MzI=
2,598
Unable to download omp dataset
{ "login": "erikadistefano", "id": 25797960, "node_id": "MDQ6VXNlcjI1Nzk3OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/25797960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erikadistefano", "html_url": "https://github.com/erikadistefano", "followers_url": "https://api.github.com/users/erikadistefano/followers", "following_url": "https://api.github.com/users/erikadistefano/following{/other_user}", "gists_url": "https://api.github.com/users/erikadistefano/gists{/gist_id}", "starred_url": "https://api.github.com/users/erikadistefano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erikadistefano/subscriptions", "organizations_url": "https://api.github.com/users/erikadistefano/orgs", "repos_url": "https://api.github.com/users/erikadistefano/repos", "events_url": "https://api.github.com/users/erikadistefano/events{/privacy}", "received_events_url": "https://api.github.com/users/erikadistefano/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset." ]
1,625,580,052,000
1,625,662,595,000
1,625,662,595,000
NONE
null
null
null
## Describe the bug
The omp dataset cannot be downloaded because of a `DuplicatedKeysError`.

## Steps to reproduce the bug
```python
from datasets import load_dataset

omp = load_dataset('omp', 'posts_labeled')
print(omp)
```

## Expected results
This code should download the omp dataset and print the dictionary.

## Actual results
```
Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b...
0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split
    writer.write(example, key)
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write
    self.check_duplicate_keys()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hf_datasets.py", line 32, in <module>
    omp = load_dataset('omp', 'posts_labeled')
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split
    num_examples, num_bytes = writer.finalize()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize
    self.check_duplicate_keys()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature
```

## Environment info
- `datasets` version: 1.8.0
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.6.9
- PyArrow version: 3.0.0
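The eventual fix lives in the omp loading script itself, but the general pattern for avoiding a `DuplicatedKeysError` is to yield keys that are guaranteed unique. A minimal sketch of that pattern (the field names are hypothetical, not the actual omp schema):

```python
import json

def generate_examples(filepath):
    """Sketch of a loading-script generator that yields guaranteed-unique keys."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            post = json.loads(line)
            # Yield the enumeration index as the key: unlike a source-provided
            # ID such as post["ID_Post"], it can never repeat across rows.
            yield idx, {"id": post["ID_Post"], "body": post["Body"]}
```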
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2598/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2597/comments
https://api.github.com/repos/huggingface/datasets/issues/2597/events
https://github.com/huggingface/datasets/pull/2597
937,917,770
MDExOlB1bGxSZXF1ZXN0Njg0Mzk0MDIz
2,597
Remove redundant prepare_module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2851292821, "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring", "name": "refactoring", "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,579,265,000
1,626,099,052,000
1,625,662,906,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2597", "html_url": "https://github.com/huggingface/datasets/pull/2597", "diff_url": "https://github.com/huggingface/datasets/pull/2597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2597.patch", "merged_at": 1625662906000 }
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2597/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2596/comments
https://api.github.com/repos/huggingface/datasets/issues/2596/events
https://github.com/huggingface/datasets/issues/2596
937,598,914
MDU6SXNzdWU5Mzc1OTg5MTQ=
2,596
Transformer Class on dataset
{ "login": "arita37", "id": 18707623, "node_id": "MDQ6VXNlcjE4NzA3NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arita37", "html_url": "https://github.com/arita37", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "organizations_url": "https://api.github.com/users/arita37/orgs", "repos_url": "https://api.github.com/users/arita37/repos", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "received_events_url": "https://api.github.com/users/arita37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! Do you have an example in mind that shows how this could be useful ?", "Example:\n\nMerge 2 datasets into one datasets\n\nLabel extraction from dataset\n\ndataset(text, label)\n —> dataset(text, newlabel)\n\nTextCleaning.\n\n\nFor image dataset, \nTransformation are easier (ie linear algebra).\n\n\n\n\n\n\n> On Jul 6, 2021, at 17:39, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> Hi ! Do you have an example in mind that shows how this could be useful ?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.\r\nYou can find examples in the documentation here:\r\nhttps://huggingface.co/docs/datasets/processing.html\r\n\r\nYou can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for example", "Ok, sure.\n\nThanks for pointing on functional part.\nMy question is more\n“Philosophical”/Design perspective.\n\nThere are 2 perspetive:\n Add transformation methods to \n Dataset Class\n\n\n OR Create a Transformer Class\n which operates on Dataset Class.\n\nT(Dataset) —> Dataset\n\ndatasetnew = MyTransform.transform(dataset)\ndatasetNew.save(path)\n\n\nWhat would be the difficulty\nof implementing a Transformer Class\noperating at dataset level ?\n\n\nthanks\n\n\n\n\n\n\n\n\n\n> On Jul 6, 2021, at 22:00, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> There are already a few transformations that you can apply on a dataset using methods like dataset.map().\n> You can find examples in the documentation here:\n> https://huggingface.co/docs/datasets/processing.html\n> \n> You can merge two datasets with concatenate_datasets() or do label extraction with dataset.map() for example\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\r\n\r\nI guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)", "Thanks for reply.\n\nWhat would be the constraints\nto have\nDataset —> Dataset consistency ?\n\nMain issue would be\nlarger than memory dataset and\nserialization on disk.\n\nTechnically,\none still process at atomic level\nand try to wrap the full results\ninto Dataset…. (!)\n\nWhat would you think ?\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 16:51, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\n> \n> I guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "We can be pretty flexible and not impose any constraints for transforms.\r\n\r\nMoreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. 
Even processing functions like `map` work in a batched fashion to not fill up your RAM. So this shouldn't be an issue", "Ok thanks.\n\nBut, Dataset has various flavors.\nIn current design of Dataset,\n how the serialization on disk is done (?)\n\n\nThe main issue is serialization \nof newdataset= Transform(Dataset)\n (ie thats why am referring to Out Of memory dataset…):\n\n Should be part of Transform or part of dataset ?\n\n\n\n\nMaybe, not, since the output is aimed to feed model in memory (?)\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 18:04, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> We can be pretty flexible and not impose any constraints for transforms.\n> \n> Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like map work in a batched fashion to not fill up your RAM. So this shouldn't be an issue\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "I'm not sure I understand, could you elaborate a bit more please ?\r\n\r\nEach dataset is a wrapper of a PyArrow Table that contains all the data. The table is loaded from an arrow file on the disk.\r\nWe have an ArrowWriter and ArrowReader class to write/read arrow tables on disk or in in-memory buffers." ]
1,625,556,435,000
1,625,732,525,000
null
NONE
null
null
null
Just wondering if you have the intention to create a TransformerClass : dataset --> dataset that makes deterministic transformations (ie not fit).
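For reference, the thread's point that these transforms are already expressible boils down to `map()` and `concatenate_datasets()`. A sketch of the three examples raised above (merging, text cleaning, label extraction), using a placeholder dataset:

```python
from datasets import load_dataset, concatenate_datasets

# Merge two datasets into one (their features must match)
ds1 = load_dataset("rotten_tomatoes", split="train")  # placeholder datasets
ds2 = load_dataset("rotten_tomatoes", split="test")
merged = concatenate_datasets([ds1, ds2])

# Text cleaning as a deterministic dataset -> dataset transform
def clean_text(example):
    example["text"] = " ".join(example["text"].split())  # collapse extra whitespace
    return example

cleaned = merged.map(clean_text)

# Label extraction: dataset(text, label) -> dataset(text, newlabel)
relabeled = cleaned.map(lambda ex: {"newlabel": int(ex["label"] == 1)})
```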
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2596/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2595/comments
https://api.github.com/repos/huggingface/datasets/issues/2595/events
https://github.com/huggingface/datasets/issues/2595
937,483,120
MDU6SXNzdWU5Mzc0ODMxMjA=
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
{ "login": "profsatwinder", "id": 41314912, "node_id": "MDQ6VXNlcjQxMzE0OTEy", "avatar_url": "https://avatars.githubusercontent.com/u/41314912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/profsatwinder", "html_url": "https://github.com/profsatwinder", "followers_url": "https://api.github.com/users/profsatwinder/followers", "following_url": "https://api.github.com/users/profsatwinder/following{/other_user}", "gists_url": "https://api.github.com/users/profsatwinder/gists{/gist_id}", "starred_url": "https://api.github.com/users/profsatwinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/profsatwinder/subscriptions", "organizations_url": "https://api.github.com/users/profsatwinder/orgs", "repos_url": "https://api.github.com/users/profsatwinder/repos", "events_url": "https://api.github.com/users/profsatwinder/events{/privacy}", "received_events_url": "https://api.github.com/users/profsatwinder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.", "@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. " ]
1,625,541,655,000
1,625,551,189,000
1,625,551,189,000
NONE
null
null
null
Error traceback:
```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
      1 from datasets import load_dataset, load_metric
      2
----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation")
      4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test")

9 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>()
     19
     20 import datasets
---> 21 from datasets.tasks import AutomaticSpeechRecognition
     22

ModuleNotFoundError: No module named 'datasets.tasks'
```
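Since the resolution was simply upgrading, a quick sanity check before loading helps: per the comments above, `datasets.tasks` only exists from version 1.9.0 onwards. A small sketch:

```python
import datasets

print(datasets.__version__)  # needs to be >= 1.9.0 for datasets.tasks

# This is the import the common_voice loading script performs internally:
from datasets.tasks import AutomaticSpeechRecognition
```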
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2595/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2594/comments
https://api.github.com/repos/huggingface/datasets/issues/2594/events
https://github.com/huggingface/datasets/pull/2594
937,294,772
MDExOlB1bGxSZXF1ZXN0NjgzODc0NjIz
2,594
Fix BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,509,450,000
1,625,547,578,000
1,625,547,578,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2594", "html_url": "https://github.com/huggingface/datasets/pull/2594", "diff_url": "https://github.com/huggingface/datasets/pull/2594.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2594.patch", "merged_at": 1625547578000 }
Fix BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2594/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2593/comments
https://api.github.com/repos/huggingface/datasets/issues/2593/events
https://github.com/huggingface/datasets/pull/2593
937,242,137
MDExOlB1bGxSZXF1ZXN0NjgzODMwMjcy
2,593
Support pandas 1.3.0 read_csv
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,503,204,000
1,625,505,254,000
1,625,505,254,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2593", "html_url": "https://github.com/huggingface/datasets/pull/2593", "diff_url": "https://github.com/huggingface/datasets/pull/2593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2593.patch", "merged_at": 1625505254000 }
Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults) 1304 1305 if names is not lib.no_default and prefix is not lib.no_default: -> 1306 raise ValueError("Specified named and prefix; you can only specify one.") 1307 1308 kwds["names"] = None if names is lib.no_default else names ValueError: Specified named and prefix; you can only specify one. ```
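For context, the incompatibility only triggers when both `names` and `prefix` reach `read_csv`. A sketch of the safe call patterns under pandas 1.3.0 (the file path is a placeholder):

```python
import pandas as pd

# pandas 1.3.0 raises "Specified named and prefix; you can only specify one."
# when both keyword arguments are passed, so pass at most one of them:
df = pd.read_csv("data.csv", names=["a", "b", "c"])      # explicit column names
df = pd.read_csv("data.csv", header=None, prefix="col")  # generated names col0, col1, ...
```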
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2593/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2592/comments
https://api.github.com/repos/huggingface/datasets/issues/2592/events
https://github.com/huggingface/datasets/pull/2592
937,060,559
MDExOlB1bGxSZXF1ZXN0NjgzNjc2MjA4
2,592
Add c4.noclean infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,489,500,000
1,625,490,953,000
1,625,490,952,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2592", "html_url": "https://github.com/huggingface/datasets/pull/2592", "diff_url": "https://github.com/huggingface/datasets/pull/2592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2592.patch", "merged_at": 1625490952000 }
Add the data file checksums and the dataset size for the c4.noclean configuration of the C4 dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2592/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2591/comments
https://api.github.com/repos/huggingface/datasets/issues/2591/events
https://github.com/huggingface/datasets/issues/2591
936,957,975
MDU6SXNzdWU5MzY5NTc5NzU=
2,591
Cached dataset overflowing disk space
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! I'm transferring this issue over to `datasets`", "I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n", "Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be stored on a different path location, other than the default one (`~/.cache/huggingface/datasets`):\r\n - either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location\r\n - or by passing it with the parameter `cache_dir` when loading each of the datasets: `dataset = load_dataset(..., cache_dir=your_new_location)`\r\n\r\n You can get all the information in the docs: https://huggingface.co/docs/datasets/loading_datasets.html#cache-directory\r\n- I wouldn't recommend disabling caching, because current implementation generates cache files anyway, although in a temporary directory and they are deleted when the session closes. See details here: https://huggingface.co/docs/datasets/processing.html#enable-or-disable-caching\r\n- You could alternatively load the datasets in streaming mode. This is a new feature which allows loading the datasets without downloading the entire files. More information here: https://huggingface.co/docs/datasets/dataset_streaming.html", "Hi @BirgerMoell,\r\n\r\nWe are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604).\r\n\r\nWe will ping you once this feature is implemented, so that the size of your cache directory will be considerably reduced." ]
1,625,481,799,000
1,626,685,699,000
1,626,685,699,000
CONTRIBUTOR
null
null
null
I'm training a Swedish Wav2Vec2 model on a Linux GPU, and the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB, and now my disk space is full. Is there a way to toggle caching, or to store the cache on a different device? (I have another drive with 4 TB that could hold the cache files.) This might not technically be a bug, but I was unsure, and I felt that "bug" was the closest label.

```
Traceback (most recent call last):
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single
    writer.finalize()
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize
    self.pa_writer.close()
  File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close
  File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
"""

The above exception was the direct cause of the following exception:
```
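The options listed in the comments translate to code roughly as follows (the paths and the dataset/config names are placeholders):

```python
import os

# Option 1: move the whole cache; must be set before `datasets` is imported,
# because the env variable is read once at import time.
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_cache"

from datasets import load_dataset

# Option 2: redirect the cache for a single dataset
ds = load_dataset("common_voice", "sv-SE", split="train",
                  cache_dir="/mnt/bigdrive/hf_cache")

# Option 3: streaming mode, which avoids materializing the files on disk
# (at the time, supported for a subset of datasets)
streamed = load_dataset("common_voice", "sv-SE", split="train", streaming=True)
```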
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2591/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2590/comments
https://api.github.com/repos/huggingface/datasets/issues/2590/events
https://github.com/huggingface/datasets/pull/2590
936,954,348
MDExOlB1bGxSZXF1ZXN0NjgzNTg1MDg2
2,590
Add language tags
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,481,597,000
1,625,482,728,000
1,625,482,728,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2590", "html_url": "https://github.com/huggingface/datasets/pull/2590", "diff_url": "https://github.com/huggingface/datasets/pull/2590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2590.patch", "merged_at": 1625482728000 }
This PR adds some missing language tags needed for ASR datasets in #2565
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2590/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2589/comments
https://api.github.com/repos/huggingface/datasets/issues/2589/events
https://github.com/huggingface/datasets/pull/2589
936,825,060
MDExOlB1bGxSZXF1ZXN0NjgzNDc0OTQ0
2,589
Support multilabel metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "Hi ! Thanks for the fix :)\r\n\r\nIf I understand correctly, `OptionalSequence` doesn't have an associated arrow type that we know in advance unlike the other feature types, because it depends on the type of the examples.\r\n\r\nFor example, I tested this and it raises an error:\r\n```python\r\nimport datasets as ds\r\nimport pyarrow as pa\r\n\r\nfeatures = ds.Features({\"a\": ds.features.OptionalSequence(ds.Value(\"int32\"))})\r\nbatch = {\"a\": [[0]]}\r\n\r\nwriter = ds.ArrowWriter(features=features, stream=pa.BufferOutputStream())\r\nwriter.write_batch(batch)\r\n# ArrowInvalid: Could not convert [0] with type list: tried to convert to int\r\n```\r\nThis error happens because `features.type` is `StructType(struct<a: int32>)`.\r\n\r\nAnother way to add support for multilabel would be to have several configurations for these metrics. By default it would set the features without sequences, and for the multi label configuration it would use features with sequences. Let me know what you think", "Hi @lhoestq, thanks for your feedback :)\r\n\r\nDefinitely, your suggested approach is simpler. I am going to refactor all my PR unless we could envision some other use cases where an OptionalSequence might be convenient, but for now I can't think of any..." ]
1,625,473,165,000
1,626,099,130,000
1,625,733,615,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2589", "html_url": "https://github.com/huggingface/datasets/pull/2589", "diff_url": "https://github.com/huggingface/datasets/pull/2589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2589.patch", "merged_at": 1625733615000 }
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
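A hedged sketch of what the finally merged approach (separate metric configurations, per the review thread above) looks like from the user side; the `"multilabel"` config name is an assumption here, not a confirmed identifier:

```python
from datasets import load_metric

metric = load_metric("f1", "multilabel")  # second positional arg = config name (assumed)

predictions = [[1, 0, 1], [0, 1, 0]]
references = [[1, 0, 0], [0, 1, 1]]
print(metric.compute(predictions=predictions, references=references, average="macro"))
```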
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2589/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2588/comments
https://api.github.com/repos/huggingface/datasets/issues/2588/events
https://github.com/huggingface/datasets/pull/2588
936,795,541
MDExOlB1bGxSZXF1ZXN0NjgzNDQ5Njky
2,588
Fix test_is_small_dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,471,186,000
1,626,099,011,000
1,625,591,370,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2588", "html_url": "https://github.com/huggingface/datasets/pull/2588", "diff_url": "https://github.com/huggingface/datasets/pull/2588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2588.patch", "merged_at": 1625591370000 }
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the environment variable is read in `datasets.config` when `datasets` is first imported, and it is never re-read during the tests.
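The underlying issue generalizes: once `datasets.config` has read the environment at import time, only patching the module attribute itself has any effect. A minimal sketch with pytest's `monkeypatch` (the attribute name and the stand-in helper are assumptions, not the exact library internals):

```python
import datasets.config

def is_small_dataset(dataset_size: int) -> bool:
    # Stand-in for the helper under test: it reads the config value at call time.
    return dataset_size < datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES

def test_is_small_dataset(monkeypatch):
    # Patch the already-imported attribute; setting the env variable here
    # would be ignored because it was only read when `datasets` was imported.
    monkeypatch.setattr(datasets.config, "MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", 100)
    assert is_small_dataset(50)
    assert not is_small_dataset(200)
```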
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2588/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2587/comments
https://api.github.com/repos/huggingface/datasets/issues/2587/events
https://github.com/huggingface/datasets/pull/2587
936,771,339
MDExOlB1bGxSZXF1ZXN0NjgzNDI5NjQy
2,587
Add aiohttp to tests extras require
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,469,241,000
1,625,475,878,000
1,625,475,878,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2587", "html_url": "https://github.com/huggingface/datasets/pull/2587", "diff_url": "https://github.com/huggingface/datasets/pull/2587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2587.patch", "merged_at": 1625475878000 }
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require `aiohttp` and this dependency is missing from our `tests` extras require. Our CI test suite should be exhaustive and test all the library's functionalities.
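A hedged sketch of the corresponding `setup.py` change (the surrounding structure is assumed, not copied from the repository):

```python
# setup.py (excerpt, simplified)
TESTS_REQUIRE = [
    "pytest",
    "aiohttp",  # required by the streaming tests (HTTP filesystem via fsspec)
]

EXTRAS_REQUIRE = {
    "tests": TESTS_REQUIRE,
}
```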
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2587/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2586/comments
https://api.github.com/repos/huggingface/datasets/issues/2586/events
https://github.com/huggingface/datasets/pull/2586
936,747,588
MDExOlB1bGxSZXF1ZXN0NjgzNDEwMDU3
2,586
Fix misalignment in SQuAD
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,467,340,000
1,626,099,070,000
1,625,663,931,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2586", "html_url": "https://github.com/huggingface/datasets/pull/2586", "diff_url": "https://github.com/huggingface/datasets/pull/2586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2586.patch", "merged_at": 1625663931000 }
Fix misalignment between:
- the answer text and
- the answer_start within the context

by keeping original leading blank spaces in the context.

Fix #2585.
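A tiny self-contained illustration of why keeping the leading blank spaces matters (the strings are made up, but the arithmetic mirrors the reported 'ure Land,' off-by-one):

```python
# A context with a leading space, and an answer whose offset was computed
# on the raw (unstripped) context, as in the source SQuAD JSON.
context_raw = " The Pure Land is ..."
answer, answer_start = "Pure Land", 5

stripped = context_raw.strip()
print(stripped[answer_start:answer_start + len(answer)])     # 'ure Land ' -- misaligned
print(context_raw[answer_start:answer_start + len(answer)])  # 'Pure Land' -- aligned
```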
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2586/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2585/comments
https://api.github.com/repos/huggingface/datasets/issues/2585/events
https://github.com/huggingface/datasets/issues/2585
936,484,419
MDU6SXNzdWU5MzY0ODQ0MTk=
2,585
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index
{ "login": "mmajurski", "id": 9354454, "node_id": "MDQ6VXNlcjkzNTQ0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmajurski", "html_url": "https://github.com/mmajurski", "followers_url": "https://api.github.com/users/mmajurski/followers", "following_url": "https://api.github.com/users/mmajurski/following{/other_user}", "gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions", "organizations_url": "https://api.github.com/users/mmajurski/orgs", "repos_url": "https://api.github.com/users/mmajurski/repos", "events_url": "https://api.github.com/users/mmajurski/events{/privacy}", "received_events_url": "https://api.github.com/users/mmajurski/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mmajurski, thanks for reporting this issue.\r\n\r\nIndeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.\r\n\r\nI'm going to fix our script so that all leading blank spaces in the source dataset are kept, and there is no misalignment between the answer text and the answer_start within the context.", "If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider.\r\nThere are occasional double spaces separating words which it might be nice to get rid of. \r\n\r\nEither way, thank you." ]
1,625,413,189,000
1,625,663,931,000
1,625,663,931,000
NONE
null
null
null
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['Pure Land'], 'answer_start': [146]} However the actual text in context at location 146 is 'ure Land,' Which is an off-by-one error from the correct answer. ## Steps to reproduce the bug ```python import datasets def check_context_answer_alignment(example): for a_idx in range(len(example['answers']['text'])): # check raw dataset for answer consistency between context and answer answer_text = example['answers']['text'][a_idx] a_st_idx = example['answers']['answer_start'][a_idx] a_end_idx = a_st_idx + len(example['answers']['text'][a_idx]) answer_text_from_context = example['context'][a_st_idx:a_end_idx] if answer_text != answer_text_from_context: #print(example['id']) return False return True dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True) start_len = len(dataset) dataset = dataset.filter(check_context_answer_alignment, num_proc=1, keep_in_memory=True) end_len = len(dataset) print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len)) ``` ## Expected results This code should result in 0 rows being filtered out from the dataset. ## Actual results This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location. This code will reproduce the problem and produce the following count: "258 instances contain mis-alignment between the answer text and answer index." 
## Environment info Steps to rebuild the Conda environment: ``` # create a virtual environment to stuff all these packages into conda create -n round8 python=3.8 -y # activate the virtual environment conda activate round8 # install pytorch (best done through conda to handle cuda dependencies) conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia pip install jsonpickle transformers datasets matplotlib ``` OS: Ubuntu 20.04 Python 3.8 Result of `conda env export`: ``` name: round8 channels: - pytorch-lts - nvidia - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - blas=1.0=mkl - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2021.5.25=h06a4308_1 - certifi=2021.5.30=py38h06a4308_0 - cffi=1.14.5=py38h261ae71_0 - chardet=4.0.0=py38h06a4308_1003 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.1.74=h6bb024c_0 - ffmpeg=4.2.2=h20bf706_0 - freetype=2.10.4=h5ab3b9f_0 - gmp=6.2.1=h2531618_2 - gnutls=3.6.15=he1e5248_0 - idna=2.10=pyhd3eb1b0_0 - intel-openmp=2021.2.0=h06a4308_610 - jpeg=9b=h024ee3a_2 - lame=3.100=h7b6447c_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libidn2=2.3.1=h27cfd23_0 - libopus=1.3.1=h7b6447c_0 - libpng=1.6.37=hbc83047_0 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libtasn1=4.16.0=h27cfd23_0 - libtiff=4.2.0=h85742a9_0 - libunistring=0.9.10=h27cfd23_0 - libuv=1.40.0=h7b6447c_0 - libvpx=1.7.0=h439df22_0 - libwebp-base=1.2.0=h27cfd23_0 - lz4-c=1.9.3=h2531618_0 - mkl=2021.2.0=h06a4308_296 - mkl-service=2.3.0=py38h27cfd23_1 - mkl_fft=1.3.0=py38h42c9631_2 - mkl_random=1.2.1=py38ha9443f7_2 - ncurses=6.2=he6710b0_1 - nettle=3.7.3=hbbd107a_1 - ninja=1.10.2=hff7bd54_1 - numpy=1.20.2=py38h2d18471_0 - numpy-base=1.20.2=py38hfae3a4d_0 - olefile=0.46=py_0 - openh264=2.1.0=hd408876_0 - openssl=1.1.1k=h27cfd23_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.1.2=py38h06a4308_0 - pycparser=2.20=py_2 - pyopenssl=20.0.1=pyhd3eb1b0_1 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.10=h12debd9_8 - pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - setuptools=52.0.0=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tk=8.6.10=hbc83047_0 - torchtext=0.9.1=py38 - torchvision=0.9.1=py38_cu111 - typing_extensions=3.7.4.3=pyha847dfd_0 - urllib3=1.26.4=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - x264=1!157.20191217=h7b6447c_0 - xz=5.2.5=h7b6447c_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.9=haebb681_0 - pip: - click==8.0.1 - cycler==0.10.0 - datasets==1.8.0 - dill==0.3.4 - filelock==3.0.12 - fsspec==2021.6.0 - huggingface-hub==0.0.8 - joblib==1.0.1 - jsonpickle==2.0.0 - kiwisolver==1.3.1 - matplotlib==3.4.2 - multiprocess==0.70.12.2 - packaging==20.9 - pandas==1.2.4 - pyarrow==3.0.0 - pyparsing==2.4.7 - python-dateutil==2.8.1 - pytz==2021.1 - regex==2021.4.4 - sacremoses==0.0.45 - tokenizers==0.10.3 - tqdm==4.49.0 - transformers==4.6.1 - xxhash==2.0.2 prefix: /home/mmajurski/anaconda3/envs/round8 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2585/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2584/comments
https://api.github.com/repos/huggingface/datasets/issues/2584/events
https://github.com/huggingface/datasets/pull/2584
936,049,736
MDExOlB1bGxSZXF1ZXN0NjgyODY2Njc1
2,584
wi_locness: reference latest leaderboard on codalab
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,257,582,000
1,625,475,974,000
1,625,475,974,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2584", "html_url": "https://github.com/huggingface/datasets/pull/2584", "diff_url": "https://github.com/huggingface/datasets/pull/2584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2584.patch", "merged_at": 1625475974000 }
The dataset's author asked me to put this codalab link into the dataset's README.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2584/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2583/comments
https://api.github.com/repos/huggingface/datasets/issues/2583/events
https://github.com/huggingface/datasets/issues/2583
936,034,976
MDU6SXNzdWU5MzYwMzQ5NzY=
2,583
Error iteration over IterableDataset using Torch DataLoader
{ "login": "LeenaShekhar", "id": 12227436, "node_id": "MDQ6VXNlcjEyMjI3NDM2", "avatar_url": "https://avatars.githubusercontent.com/u/12227436?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeenaShekhar", "html_url": "https://github.com/LeenaShekhar", "followers_url": "https://api.github.com/users/LeenaShekhar/followers", "following_url": "https://api.github.com/users/LeenaShekhar/following{/other_user}", "gists_url": "https://api.github.com/users/LeenaShekhar/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeenaShekhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeenaShekhar/subscriptions", "organizations_url": "https://api.github.com/users/LeenaShekhar/orgs", "repos_url": "https://api.github.com/users/LeenaShekhar/repos", "events_url": "https://api.github.com/users/LeenaShekhar/events{/privacy}", "received_events_url": "https://api.github.com/users/LeenaShekhar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! This is because you first need to format the dataset for pytorch:\r\n\r\n```python\r\n>>> import torch\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n>>> torch_iterable_dataset = dataset.with_format(\"torch\")\r\n>>> assert isinstance(torch_iterable_dataset, torch.utils.data.IterableDataset)\r\n>>> dataloader = torch.utils.data.DataLoader(torch_iterable_dataset, batch_size=4)\r\n>>> next(iter(dataloader))\r\n{'id': tensor([0, 1, 2, 3]), 'text': ['Mtendere Village was inspired...]}\r\n```\r\n\r\nThis is because the pytorch dataloader expects a subclass of `torch.utils.data.IterableDataset`. Since you can't pass an arbitrary iterable to a pytorch dataloader, you first need to build an object that inherits from `torch.utils.data.IterableDataset` using `with_format(\"torch\")` for example.\r\n", "Thank you for that and the example! \r\n\r\nWhat you said makes total sense; I just somehow missed that and assumed HF IterableDataset was a subclass of Torch IterableDataset. " ]
1,625,255,758,000
1,626,771,885,000
1,625,528,903,000
NONE
null
null
null
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using the Torch DataLoader class by passing this IterableDataset to it. This throws an error, which is pasted below. I can do the same by using a Torch IterableDataset. One thing I noticed is that in the former case, when I look at the dataloader.sampler class, I get torch.utils.data.sampler.SequentialSampler, while the latter one gives torch.utils.data.dataloader._InfiniteConstantSampler. I am not sure if this is how it is meant to be used, but that's what seemed reasonable to me. ## Steps to reproduce the bug 1. Does not work. ```python >>> from datasets import load_dataset >>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) >>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4) >>> dataloader.sampler <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> >>> for batch in dataloader: ... print(batch) ``` 2. Works. ```python import torch from torch.utils.data import Dataset, IterableDataset, DataLoader class CustomIterableDataset(IterableDataset): 'Characterizes a dataset for PyTorch' def __init__(self, data): 'Initialization' self.data = data def __iter__(self): return iter(self.data) data = list(range(12)) dataset = CustomIterableDataset(data) dataloader = DataLoader(dataset, batch_size=4) print("dataloader: ", dataloader.sampler) for batch in dataloader: print(batch) ``` ## Expected results To get batches of data with the batch size as 4. Output from the latter one (2) is shown below, though the data source is different there, so the actual data differs. dataloader: <torch.utils.data.dataloader._InfiniteConstantSampler object at 0x7f1cc29e2c50> tensor([0, 1, 2, 3]) tensor([4, 5, 6, 7]) tensor([ 8, 9, 10, 11]) ## Actual results <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 474, in _next_data index = self._next_index() # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index return next(self._sampler_iter) # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 227, in __iter__ for idx in self.sampler: File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 67, in __iter__ return iter(range(len(self.data_source))) TypeError: object of type 'IterableDataset' has no len() ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: '1.8.1.dev0' - Platform: Linux - Python version: Python 3.6.8 - PyArrow version: '3.0.0'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2583/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2582/comments
https://api.github.com/repos/huggingface/datasets/issues/2582/events
https://github.com/huggingface/datasets/pull/2582
935,859,104
MDExOlB1bGxSZXF1ZXN0NjgyNzAzNzg3
2,582
Add skip and take
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq looks good. I tried with https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed and it worked nicely. I would add more unit tests for edge cases. What happens if the n is larger than the total number of samples? Just to make sure these cases are handled properly. ", "Yup I'll add the tests thanks ;)\r\n\r\nMoreover, I just noticed something in your wiki snippets code. FYI you're using `++passage_counter ` at https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed/blob/main/wikipedia_snippets_streamed.py#L102 but in python this doesn't increment the value @vblagoje ", "Thanks @lhoestq - not easy to convert after 10+ years of Java" ]
1,625,238,619,000
1,625,501,200,000
1,625,501,199,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2582", "html_url": "https://github.com/huggingface/datasets/pull/2582", "diff_url": "https://github.com/huggingface/datasets/pull/2582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2582.patch", "merged_at": 1625501199000 }
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods, which allow basic splitting of iterable datasets. You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`. One implementation detail: using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards; otherwise, the taken examples could come from other shards. In this case it only uses the shuffle buffer. I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know the length of each shard in advance, we don't know which shards to take or skip. I think this is OK though, since users can shuffle before doing take or skip; a usage sketch follows below. I mentioned this in the documentation. cc @vblagoje @lewtun
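A minimal usage sketch of the new methods (the dataset name and sizes are illustrative):

```python
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
# shuffle *before* take/skip so the shards can still be shuffled
shuffled = dataset.shuffle(buffer_size=10_000, seed=42)
eval_head = shuffled.take(1000)   # a new IterableDataset with the first 1000 examples
train_tail = shuffled.skip(1000)  # a new IterableDataset with everything after them
```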
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2582/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2582/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2581/comments
https://api.github.com/repos/huggingface/datasets/issues/2581/events
https://github.com/huggingface/datasets/pull/2581
935,783,588
MDExOlB1bGxSZXF1ZXN0NjgyNjQwMDY4
2,581
Faster search_batch for ElasticsearchIndex due to threading
{ "login": "mwrzalik", "id": 1376337, "node_id": "MDQ6VXNlcjEzNzYzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mwrzalik", "html_url": "https://github.com/mwrzalik", "followers_url": "https://api.github.com/users/mwrzalik/followers", "following_url": "https://api.github.com/users/mwrzalik/following{/other_user}", "gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions", "organizations_url": "https://api.github.com/users/mwrzalik/orgs", "repos_url": "https://api.github.com/users/mwrzalik/repos", "events_url": "https://api.github.com/users/mwrzalik/events{/privacy}", "received_events_url": "https://api.github.com/users/mwrzalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[]
1,625,233,327,000
1,626,099,226,000
1,626,083,571,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2581", "html_url": "https://github.com/huggingface/datasets/pull/2581", "diff_url": "https://github.com/huggingface/datasets/pull/2581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2581.patch", "merged_at": 1626083571000 }
Hey, I think it makes sense to run `search_batch` in threads, so that Elasticsearch can execute the searches in parallel. Cheers!
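A rough sketch of the idea (not the actual patch), assuming an index object that exposes a per-query `search(query, k)` method as the `datasets` search API does:

```python
from concurrent.futures import ThreadPoolExecutor

def search_batch_threaded(index, queries, k=10):
    # fan the individual searches out over a thread pool so Elasticsearch
    # can serve them concurrently instead of strictly one after another
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda query: index.search(query, k=k), queries))
```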
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2581/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2580/comments
https://api.github.com/repos/huggingface/datasets/issues/2580/events
https://github.com/huggingface/datasets/pull/2580
935,767,421
MDExOlB1bGxSZXF1ZXN0NjgyNjI2MTkz
2,580
Fix Counter import
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,232,108,000
1,625,236,667,000
1,625,236,666,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2580", "html_url": "https://github.com/huggingface/datasets/pull/2580", "diff_url": "https://github.com/huggingface/datasets/pull/2580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2580.patch", "merged_at": 1625236666000 }
Import from `collections` instead of `typing`.
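As a small illustration of the change (a sketch, not the patched file itself): `typing.Counter` is meant for type annotations, while the runtime class lives in `collections`.

```python
# wrong place to import the runtime class:
# from typing import Counter

# correct import:
from collections import Counter

word_counts = Counter(["a", "b", "a"])
assert word_counts["a"] == 2
```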
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2580/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2579/comments
https://api.github.com/repos/huggingface/datasets/issues/2579/events
https://github.com/huggingface/datasets/pull/2579
935,486,894
MDExOlB1bGxSZXF1ZXN0NjgyMzkyNjYx
2,579
Fix BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,209,840,000
1,625,211,224,000
1,625,211,224,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2579", "html_url": "https://github.com/huggingface/datasets/pull/2579", "diff_url": "https://github.com/huggingface/datasets/pull/2579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2579.patch", "merged_at": 1625211224000 }
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2579/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2578/comments
https://api.github.com/repos/huggingface/datasets/issues/2578/events
https://github.com/huggingface/datasets/pull/2578
935,187,497
MDExOlB1bGxSZXF1ZXN0NjgyMTQ0OTY2
2,578
Support Zstandard compressed files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir). \r\n\r\nOn the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).\r\n\r\nApart from `zsatandard`, there are many other dev/test required dependencies for running tests, and we do not have a `@require_toto` for each and every of these dependencies in our tests: \r\n- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require), \r\n- `boto3` (in test_filesystem.py), \r\n- `seqeval` (in test_metric_common.py), \r\n- `bs4` (used by eli5 and tested in test_hf_gcp.py)\r\n- ...\r\n\r\nSo IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed.", "Hi !\r\nI was saying that because the other dependencies you mentioned are only required for _some_ tests. While here zstd is required for _all_ tests since it's imported in the conftest.py\r\nFeel free to keep it as it is right now, or maybe move the fixture to test_file_utils.py to allow users without zstd to run tests for their builders, dataset card etc. without issues", "Thank you ! I think we can merge now", "@lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type", "> @lhoestq does this mean that the pile could have streaming support in the future? 
Afaik streaming doesnt support zstandard compressed type\r\n\r\njust for reference, i tried to stream one of the `.zst` files from [the pile](https://the-eye.eu/public/AI/pile/) using\r\n\r\n```python\r\ndata_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\nstreamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n```\r\n\r\nand got the following error:\r\n\r\n```\r\nUsing custom data configuration default-4e71acadc389c254\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n/tmp/ipykernel_1187680/10848115.py in <module>\r\n 1 data_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\n 2 \r\n----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n 4 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 835 # this extends the open and os.path.join functions for data streaming\r\n 836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)\r\n--> 837 return builder_instance.as_streaming_dataset(\r\n 838 split=split,\r\n 839 use_auth_token=use_auth_token,\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)\r\n 922 data_dir=self.config.data_dir,\r\n 923 )\r\n--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 925 # By default, return all splits\r\n 926 if split is None:\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py in _split_generators(self, dl_manager)\r\n 50 if not self.config.data_files:\r\n 51 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 53 if isinstance(data_files, (str, list, tuple)):\r\n 54 files = data_files\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n--> 142 return self.extract(self.download(url_or_urls))\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)\r\n 115 \r\n 116 def extract(self, path_or_paths):\r\n--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n 118 return urlpaths\r\n 119 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 202 num_proc = 1\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 204 mapped = [\r\n 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 204 mapped = [\r\n--> 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n 207 
]\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)\r\n 141 # Singleton first to spare some computation\r\n 142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 143 return function(data_struct)\r\n 144 \r\n 145 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)\r\n 119 \r\n 120 def _extract(self, urlpath):\r\n--> 121 protocol = self._get_extraction_protocol(urlpath)\r\n 122 if protocol is None:\r\n 123 # no extraction\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)\r\n 137 elif path.endswith(\".zip\"):\r\n 138 return \"zip\"\r\n--> 139 raise NotImplementedError(f\"Extraction protocol for file at {urlpath} is not implemented yet\")\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n\r\nNotImplementedError: Extraction protocol for file at https://the-eye.eu/public/AI/pile/train/00.jsonl.zst is not implemented yet\r\n```\r\n\r\ni'm not sure whether @Shashi456 is referring to a fundamental limitation with \"streaming\" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`\r\n\r\n", "@lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case.", "@lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786.", "thanks a lot @albertvillanova - now i can stream the pile :)" ]
1,625,170,954,000
1,628,693,184,000
1,625,482,227,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2578", "html_url": "https://github.com/huggingface/datasets/pull/2578", "diff_url": "https://github.com/huggingface/datasets/pull/2578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2578.patch", "merged_at": 1625482227000 }
Close #2572. cc: @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2578/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2576/comments
https://api.github.com/repos/huggingface/datasets/issues/2576/events
https://github.com/huggingface/datasets/pull/2576
934,986,761
MDExOlB1bGxSZXF1ZXN0NjgxOTc5MTA1
2,576
Add mC4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,154,685,000
1,625,237,456,000
1,625,237,455,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2576", "html_url": "https://github.com/huggingface/datasets/pull/2576", "diff_url": "https://github.com/huggingface/datasets/pull/2576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2576.patch", "merged_at": 1625237455000 }
AllenAI is now hosting the processed C4 and mC4 datasets in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them! In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with ```python from datasets import load_dataset en_mc4 = load_dataset("mc4", "en") fr_mc4 = load_dataset("mc4", "fr") en_and_fr_mc4 = load_dataset("mc4", languages=["en", "fr"]) ``` It also supports streaming, if you don't want to download hundreds of GB of data: ```python en_mc4 = load_dataset("mc4", "en", streaming=True) ``` Regarding the dataset_infos.json, I will add it once I have it. Also, we can work on the dataset card that will be at https://huggingface.co/datasets/mc4. For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2576/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2576/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2575/comments
https://api.github.com/repos/huggingface/datasets/issues/2575/events
https://github.com/huggingface/datasets/pull/2575
934,876,496
MDExOlB1bGxSZXF1ZXN0NjgxODg0OTgy
2,575
Add C4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,147,888,000
1,625,237,423,000
1,625,237,423,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2575", "html_url": "https://github.com/huggingface/datasets/pull/2575", "diff_url": "https://github.com/huggingface/datasets/pull/2575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2575.patch", "merged_at": 1625237423000 }
The old code for the C4 dataset generated C4 with Apache Beam, as in TensorFlow Datasets. However, AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them for their amazing work! In this PR I changed the script to download and prepare the data directly from this repo. It has 4 variants: en, en.noblocklist, en.noclean, realnewslike. You can load it with ```python from datasets import load_dataset c4 = load_dataset("c4", "en") ``` It also supports streaming, if you don't want to download hundreds of GB of data: ```python c4 = load_dataset("c4", "en", streaming=True) ``` Regarding the dataset_infos.json, I haven't added the infos for en.noclean. I will add them once I have them. Also, we can work on the dataset card at https://huggingface.co/datasets/c4. For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2575/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2575/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2574/comments
https://api.github.com/repos/huggingface/datasets/issues/2574/events
https://github.com/huggingface/datasets/pull/2574
934,632,378
MDExOlB1bGxSZXF1ZXN0NjgxNjczMzYy
2,574
Add streaming in load a dataset docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,131,973,000
1,625,148,742,000
1,625,148,741,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2574", "html_url": "https://github.com/huggingface/datasets/pull/2574", "diff_url": "https://github.com/huggingface/datasets/pull/2574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2574.patch", "merged_at": 1625148741000 }
Mention dataset streaming on the "loading a dataset" page of the documentation
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2574/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2573/comments
https://api.github.com/repos/huggingface/datasets/issues/2573/events
https://github.com/huggingface/datasets/issues/2573
934,584,745
MDU6SXNzdWU5MzQ1ODQ3NDU=
2,573
Finding right block-size with JSON loading difficult for user
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user" ]
1,625,129,315,000
1,625,166,653,000
null
MEMBER
null
null
null
As reported by @thomwolf, while loading a JSON Lines file with the "json" loading script, he gets: > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
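A sketch of the kind of workaround involved, assuming the "json" builder forwards a `block_size` (in bytes) to pyarrow's JSON reader; the file name and size here are illustrative:

```python
from datasets import load_dataset

# a larger block size lets a long JSON Lines record fit inside a single
# parse block instead of triggering "Extra data" / straddling errors
dataset = load_dataset("json", data_files="data.jsonl", block_size=10 << 20)  # 10 MiB
```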
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2573/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2572/comments
https://api.github.com/repos/huggingface/datasets/issues/2572/events
https://github.com/huggingface/datasets/issues/2572
934,573,767
MDU6SXNzdWU5MzQ1NzM3Njc=
2,572
Support Zstandard compressed files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,625,128,624,000
1,625,482,227,000
1,625,482,227,000
MEMBER
null
null
null
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
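For reference, a minimal sketch of reading a Zstandard-compressed JSON Lines file with the `zstandard` package, i.e. the kind of decompression this feature request asks the library to handle:

```python
import io
import json
import zstandard as zstd

def read_jsonl_zst(path):
    # stream-decompress the .zst file and yield one parsed record per line
    with open(path, "rb") as f:
        reader = zstd.ZstdDecompressor().stream_reader(f)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            yield json.loads(line)
```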
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2572/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2571/comments
https://api.github.com/repos/huggingface/datasets/issues/2571/events
https://github.com/huggingface/datasets/pull/2571
933,791,018
MDExOlB1bGxSZXF1ZXN0NjgwOTQ2NzQ1
2,571
Filter expected warning log from transformers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think the failing test has nothing to do with my PR..." ]
1,625,064,499,000
1,625,198,897,000
1,625,198,897,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2571", "html_url": "https://github.com/huggingface/datasets/pull/2571", "diff_url": "https://github.com/huggingface/datasets/pull/2571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2571.patch", "merged_at": 1625198896000 }
Close #2569.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2571/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2570/comments
https://api.github.com/repos/huggingface/datasets/issues/2570/events
https://github.com/huggingface/datasets/pull/2570
933,402,521
MDExOlB1bGxSZXF1ZXN0NjgwNjEzNzc0
2,570
Minor fix docs format for bertscore
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,625,038,932,000
1,625,067,061,000
1,625,067,061,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2570", "html_url": "https://github.com/huggingface/datasets/pull/2570", "diff_url": "https://github.com/huggingface/datasets/pull/2570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2570.patch", "merged_at": 1625067061000 }
Minor fix docs format for bertscore: - link to README - format of KWARGS_DESCRIPTION
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2570/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2569/comments
https://api.github.com/repos/huggingface/datasets/issues/2569/events
https://github.com/huggingface/datasets/issues/2569
933,015,797
MDU6SXNzdWU5MzMwMTU3OTc=
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
{ "login": "suzyahyah", "id": 2980993, "node_id": "MDQ6VXNlcjI5ODA5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suzyahyah", "html_url": "https://github.com/suzyahyah", "followers_url": "https://api.github.com/users/suzyahyah/followers", "following_url": "https://api.github.com/users/suzyahyah/following{/other_user}", "gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}", "starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions", "organizations_url": "https://api.github.com/users/suzyahyah/orgs", "repos_url": "https://api.github.com/users/suzyahyah/repos", "events_url": "https://api.github.com/users/suzyahyah/events{/privacy}", "received_events_url": "https://api.github.com/users/suzyahyah/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.layer_norm.weight']\r\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nIn this case, this behavior IS expected and you can safely ignore the warning message.\r\n\r\nThe reason is that you are just using RoBERTa to get the contextual embeddings of the input sentences/tokens, thus leaving away its head layer, whose weights are ignored.\r\n\r\nFeel free to reopen this issue if you need further explanations.", "Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case)." ]
1,624,992,923,000
1,625,123,339,000
1,625,038,549,000
NONE
null
null
null
When applying bertscore out of the box, I get the following warning: ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` I am following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html ``` from datasets import load_metric metric = load_metric('bertscore') # Example of typical usage for batch in dataset: inputs, references = batch predictions = model(inputs) metric.add_batch(predictions=predictions, references=references) score = metric.compute(lang="en") #score = metric.compute(model_type="roberta-large") # gives the same warning ``` I am concerned about this because my usage shouldn't require any further fine-tuning, and most people would expect to use BertScore out of the box. I realised the Hugging Face code is a wrapper around https://github.com/Tiiiger/bert_score, but I think that repo relies on the model code and weights from the Hugging Face hub anyway. ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27 - Python version: 3.9.5 - PyArrow version: 3.0.0
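For reference, a self-contained version of the usage in the issue body above (a sketch, not taken from the issue; the toy predictions and references are invented for illustration):

```python
# Sketch: end-to-end bertscore usage with toy data.
# lang="en" selects the default English model; model_type can be passed
# instead to pin a specific checkpoint such as "roberta-large".
from datasets import load_metric

metric = load_metric("bertscore")
predictions = ["hello there", "general kenobi"]  # toy data, for illustration
references = ["hello there", "general kenobi"]

metric.add_batch(predictions=predictions, references=references)
score = metric.compute(lang="en")
print(score["f1"])  # list of per-example F1 scores
```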
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2569/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2568/comments
https://api.github.com/repos/huggingface/datasets/issues/2568/events
https://github.com/huggingface/datasets/pull/2568
932,934,795
MDExOlB1bGxSZXF1ZXN0NjgwMjE5MDU2
2,568
Add interleave_datasets for map-style datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,987,164,000
1,625,132,014,000
1,625,132,013,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2568", "html_url": "https://github.com/huggingface/datasets/pull/2568", "diff_url": "https://github.com/huggingface/datasets/pull/2568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2568.patch", "merged_at": 1625132012000 }
### Add interleave_datasets for map-style datasets Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. Previously, it only supported iterable datasets (i.e. `IterableDataset` objects). ### Implementation details It works by concatenating the datasets and then re-ordering the indices to make the new dataset. ### TODO - [x] tests - [x] docs Close #2563
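To illustrate the API described in the PR body above, here is a minimal sketch (the toy datasets are invented for illustration; the `probabilities` and `seed` arguments follow the signature already used for iterable datasets):

```python
# Sketch: interleaving two map-style Dataset objects.
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12]})

# Without probabilities, examples alternate between the datasets:
# [0, 10, 1, 11, 2, 12]
mixed = interleave_datasets([d1, d2])
print(mixed["a"])

# Sampling probabilities bias the mixture; a seed makes it reproducible.
# Interleaving stops when one dataset runs out of samples.
sampled = interleave_datasets([d1, d2], probabilities=[0.8, 0.2], seed=42)
```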
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2568/timeline
null
true