url
stringlengths
58
61
repository_url
stringclasses
1 value
labels_url
stringlengths
72
75
comments_url
stringlengths
67
70
events_url
stringlengths
65
68
html_url
stringlengths
46
51
id
int64
599M
1.13B
node_id
stringlengths
18
32
number
int64
1
3.71k
title
stringlengths
1
276
user
dict
labels
list
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
list
milestone
dict
comments
int64
0
42
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
author_association
stringclasses
3 values
active_lock_reason
null
draft
bool
2 classes
pull_request
dict
body
stringlengths
0
228k
reactions
dict
timeline_url
stringlengths
67
70
performed_via_github_app
null
https://api.github.com/repos/huggingface/datasets/issues/3712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3712/comments
https://api.github.com/repos/huggingface/datasets/issues/3712/events
https://github.com/huggingface/datasets/pull/3712
1,134,252,505
PR_kwDODunzps4ynVYy
3,712
Fix the error of msr_sqa dataset
{ "login": "Timothyxxx", "id": 47296835, "node_id": "MDQ6VXNlcjQ3Mjk2ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/47296835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Timothyxxx", "html_url": "https://github.com/Timothyxxx", "followers_url": "https://api.github.com/users/Timothyxxx/followers", "following_url": "https://api.github.com/users/Timothyxxx/following{/other_user}", "gists_url": "https://api.github.com/users/Timothyxxx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Timothyxxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Timothyxxx/subscriptions", "organizations_url": "https://api.github.com/users/Timothyxxx/orgs", "repos_url": "https://api.github.com/users/Timothyxxx/repos", "events_url": "https://api.github.com/users/Timothyxxx/events{/privacy}", "received_events_url": "https://api.github.com/users/Timothyxxx/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-12T16:27:54
2022-02-12T16:27:54
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3712", "html_url": "https://github.com/huggingface/datasets/pull/3712", "diff_url": "https://github.com/huggingface/datasets/pull/3712.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3712.patch", "merged_at": null }
Fix the error of _load_table_data function in msr_sqa dataset, it is wrong to use comma to split each row.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3712/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3711/comments
https://api.github.com/repos/huggingface/datasets/issues/3711/events
https://github.com/huggingface/datasets/pull/3711
1,134,050,545
PR_kwDODunzps4ymmlK
3,711
Fix the error of _load_table_data function in msr_sqa dataset
{ "login": "Timothyxxx", "id": 47296835, "node_id": "MDQ6VXNlcjQ3Mjk2ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/47296835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Timothyxxx", "html_url": "https://github.com/Timothyxxx", "followers_url": "https://api.github.com/users/Timothyxxx/followers", "following_url": "https://api.github.com/users/Timothyxxx/following{/other_user}", "gists_url": "https://api.github.com/users/Timothyxxx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Timothyxxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Timothyxxx/subscriptions", "organizations_url": "https://api.github.com/users/Timothyxxx/orgs", "repos_url": "https://api.github.com/users/Timothyxxx/repos", "events_url": "https://api.github.com/users/Timothyxxx/events{/privacy}", "received_events_url": "https://api.github.com/users/Timothyxxx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-12T13:20:53
2022-02-12T13:30:43
2022-02-12T13:30:43
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3711", "html_url": "https://github.com/huggingface/datasets/pull/3711", "diff_url": "https://github.com/huggingface/datasets/pull/3711.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3711.patch", "merged_at": null }
The _load_table_data function from the last version is wrong, it is wrong to use comma to split each row.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3711/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3710/comments
https://api.github.com/repos/huggingface/datasets/issues/3710/events
https://github.com/huggingface/datasets/pull/3710
1,133,955,393
PR_kwDODunzps4ymQMQ
3,710
Fix CI code quality issue
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-12T12:05:39
2022-02-12T12:58:05
2022-02-12T12:58:04
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3710", "html_url": "https://github.com/huggingface/datasets/pull/3710", "diff_url": "https://github.com/huggingface/datasets/pull/3710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3710.patch", "merged_at": "2022-02-12T12:58:04" }
Fix CI code quality issue introduced by #3695.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3710/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3709/comments
https://api.github.com/repos/huggingface/datasets/issues/3709/events
https://github.com/huggingface/datasets/pull/3709
1,132,997,904
PR_kwDODunzps4yi0J4
3,709
Set base path to hub url for canonical datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-11T19:23:20
2022-02-11T19:23:20
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3709", "html_url": "https://github.com/huggingface/datasets/pull/3709", "diff_url": "https://github.com/huggingface/datasets/pull/3709.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3709.patch", "merged_at": null }
This should allow canonical datasets to use relative paths to download data files from the Hub cc @polinaeterna this will be useful if we have audio datasets that are canonical and for which you'd like to host data files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3709/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3709/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3708/comments
https://api.github.com/repos/huggingface/datasets/issues/3708/events
https://github.com/huggingface/datasets/issues/3708
1,132,968,402
I_kwDODunzps5Dh7nS
3,708
Loading JSON gets stuck with many workers/threads
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2022-02-11T18:50:48
2022-02-11T20:57:53
null
CONTRIBUTOR
null
null
null
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from datasets import load_dataset from multiprocessing import Process from tqdm import tqdm import datasets from transformers import set_seed def run_tasks_in_parallel(tasks, ds_list): for _ in tqdm(range(1000)): print('new batch') running_tasks = [Process(target=task, args=(ds, i)) for i, (task, ds) in enumerate(zip(tasks, ds_list))] for running_task in running_tasks: running_task.start() for running_task in running_tasks: running_task.join() def get_dataset(): dataset_name = 'transformersbook/codeparrot' ds = load_dataset(dataset_name+'-train', split="train", streaming=True) ds = ds.shuffle(buffer_size=1000, seed=1) return iter(ds) def get_next_element(ds, process_id, N=10000): for _ in range(N): _ = next(ds)['content'] print(f'process {process_id} done') return set_seed(1) datasets.utils.logging.set_verbosity_debug() n_processes = 8 tasks = [get_next_element for _ in range(n_processes)] args = [get_dataset() for _ in range(n_processes)] run_tasks_in_parallel(tasks, args) ``` Today I noticed that it can happen when running it on a single process on a machine with many cores without streaming. So just `load_dataset("transformersbook/codeparrot-train")` alone might cause the issue after waiting long enough or trying many times. It's a slightly random process which makes it especially hard to track down. When I encountered it today it had already processed 17GB of data (the size of the cache folder when it got stuck) before getting stuck. Here's my current understanding of the error. As far as I can tell it happens in the following block: https://github.com/huggingface/datasets/blob/be701e9e89ab38022612c7263edc015bc7feaff9/src/datasets/packaged_modules/json/json.py#L119-L139 When the try on line 121 fails and the `block_size` is increased it can happen that it can't read the JSON again and gets stuck indefinitely. A hint that points in that direction is that increasing the `chunksize` argument decreases the chance of getting stuck and vice versa. Maybe it is an issue with a lock on the file that is not properly released. ## Expected results Read a JSON before the end of the universe. ## Actual results Read a JSON not before the end of the universe. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.10 - PyArrow version: 7.0.0 @lhoestq we dicsussed this a while ago. @albertvillanova we discussed this today :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3708/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3707/comments
https://api.github.com/repos/huggingface/datasets/issues/3707/events
https://github.com/huggingface/datasets/issues/3707
1,132,741,903
I_kwDODunzps5DhEUP
3,707
`.select`: unexpected behavior with `indices`
{ "login": "gabegma", "id": 36087158, "node_id": "MDQ6VXNlcjM2MDg3MTU4", "avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabegma", "html_url": "https://github.com/gabegma", "followers_url": "https://api.github.com/users/gabegma/followers", "following_url": "https://api.github.com/users/gabegma/following{/other_user}", "gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabegma/subscriptions", "organizations_url": "https://api.github.com/users/gabegma/orgs", "repos_url": "https://api.github.com/users/gabegma/repos", "events_url": "https://api.github.com/users/gabegma/events{/privacy}", "received_events_url": "https://api.github.com/users/gabegma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2022-02-11T15:20:01
2022-02-11T20:53:53
null
NONE
null
null
null
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": ["d", "e", "f"], "label": [4, 5, 6]}) res1 = ds.select([1, 2, 3])['text'] res2 = ds.select([1000])['text'] ``` ## Expected results Both results should throw an `Error`. ## Actual results `res1` will give `['e', 'f', 'd']` `res2` will give `['e']` ## Environment info Bug found from this environment: - `datasets` version: 1.16.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.7 - PyArrow version: 6.0.1 It was also replicated on `master`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3707/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3706/comments
https://api.github.com/repos/huggingface/datasets/issues/3706/events
https://github.com/huggingface/datasets/issues/3706
1,132,218,874
I_kwDODunzps5DfEn6
3,706
Unable to load dataset 'big_patent'
{ "login": "ankitk2109", "id": 26432753, "node_id": "MDQ6VXNlcjI2NDMyNzUz", "avatar_url": "https://avatars.githubusercontent.com/u/26432753?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankitk2109", "html_url": "https://github.com/ankitk2109", "followers_url": "https://api.github.com/users/ankitk2109/followers", "following_url": "https://api.github.com/users/ankitk2109/following{/other_user}", "gists_url": "https://api.github.com/users/ankitk2109/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankitk2109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankitk2109/subscriptions", "organizations_url": "https://api.github.com/users/ankitk2109/orgs", "repos_url": "https://api.github.com/users/ankitk2109/repos", "events_url": "https://api.github.com/users/ankitk2109/events{/privacy}", "received_events_url": "https://api.github.com/users/ankitk2109/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2022-02-11T09:48:34
2022-02-11T14:28:20
null
NONE
null
null
null
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\bigPatentData\train.tar.gz doesn't exist ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.18.3 - Platform: Windows - Python version:3.8 - PyArrow version:7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3706/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3705/comments
https://api.github.com/repos/huggingface/datasets/issues/3705/events
https://github.com/huggingface/datasets/pull/3705
1,132,053,226
PR_kwDODunzps4yfhyj
3,705
Raise informative error when loading a save_to_disk dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-11T08:21:03
2022-02-11T22:56:40
2022-02-11T22:56:39
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3705", "html_url": "https://github.com/huggingface/datasets/pull/3705", "diff_url": "https://github.com/huggingface/datasets/pull/3705.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3705.patch", "merged_at": "2022-02-11T22:56:39" }
People recurrently report error when trying to load a dataset (using `load_dataset`) that was previously saved using `save_to_disk`. This PR raises an informative error message telling them they should use `load_from_disk` instead. Close #3700.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3705/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3704/comments
https://api.github.com/repos/huggingface/datasets/issues/3704/events
https://github.com/huggingface/datasets/issues/3704
1,132,042,631
I_kwDODunzps5DeZmH
3,704
OSCAR-2109 datasets are misaligned and truncated
{ "login": "adrianeboyd", "id": 5794899, "node_id": "MDQ6VXNlcjU3OTQ4OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5794899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adrianeboyd", "html_url": "https://github.com/adrianeboyd", "followers_url": "https://api.github.com/users/adrianeboyd/followers", "following_url": "https://api.github.com/users/adrianeboyd/following{/other_user}", "gists_url": "https://api.github.com/users/adrianeboyd/gists{/gist_id}", "starred_url": "https://api.github.com/users/adrianeboyd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianeboyd/subscriptions", "organizations_url": "https://api.github.com/users/adrianeboyd/orgs", "repos_url": "https://api.github.com/users/adrianeboyd/repos", "events_url": "https://api.github.com/users/adrianeboyd/events{/privacy}", "received_events_url": "https://api.github.com/users/adrianeboyd/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2022-02-11T08:14:59
2022-02-11T10:41:41
null
NONE
null
null
null
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the particular (mis)alignment is in various configurations: ```python from datasets import load_dataset dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", split="train", use_auth_token=True) entry = dataset[0] # entry["text"] is from fi_part_3.txt.gz # entry["meta"] is from fi_meta_part_2.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_no", split="train", use_auth_token=True) entry = dataset[900000] # entry["text"] is from no_part_3.txt.gz and contains a blank line # entry["meta"] is from no_meta_part_1.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_mk", split="train", streaming=True, use_auth_token=True) # 9088 texts in the dataset are empty ``` For `deduplicated_fi`, all exported raw texts from the dataset are 17GB rather than 20GB as reported in the data splits overview table. The token count with `wc -w` for the raw texts is 2,067,556,874 rather than the expected 2,357,264,196 from the data splits table. For `deduplicated_no` all exported raw texts contain 624,040,887 rather than the expected 776,354,517 tokens. For `deduplicated_mk` it is 122,236,936 rather than 134,544,934 tokens. I'm not expecting the `wc -w` counts to line up exactly with the data splits table, but for comparison the `wc -w` count for `deduplicated_mk` on the raw texts is 134,545,424. ## Issues * The meta / text files are not paired correctly when loading, so the extracted texts do not have the right offsets, the metadata is not associated with the correct text, and the text files may not be processed to the end or may be processed beyond the end (empty texts). * The line count offset is not reset per file so the texts aren't aligned to the right offsets in any parts beyond the first part, leading to truncation when in effect blank lines are not skipped. * Non-unix newline characters are treated as newlines when reading the text files while the metadata only counts unix newlines for its line offsets, leading to further misalignments between the metadata and the extracted texts, and which also results in truncation. ## Expected results All texts from the OSCAR release are extracted according to the metadata and aligned with the correct metadata. ## Fixes Not necessarily the exact fixes/checks you may want to use (I didn't test all languages or do any cross-platform testing, I'm not sure all the details are compatible with streaming), however to highlight the issues: ```diff diff --git a/OSCAR-2109.py b/OSCAR-2109.py index bbac1076..5eee8de7 100644 --- a/OSCAR-2109.py +++ b/OSCAR-2109.py @@ -20,6 +20,7 @@ import collections import gzip import json +import os import datasets @@ -387,9 +388,20 @@ class Oscar2109(datasets.GeneratorBasedBuilder): with open(checksum_file, encoding="utf-8") as f: data_filenames = [line.split()[1] for line in f if line] data_urls = [self.config.base_data_path + data_filename for data_filename in data_filenames] - text_files = dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")]) - metadata_files = dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")]) + # sort filenames so corresponding parts are aligned + text_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")])) + metadata_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")])) + assert len(text_files) == len(metadata_files) metadata_and_text_files = list(zip(metadata_files, text_files)) + for meta_path, text_path in metadata_and_text_files: + # check that meta/text part numbers are the same + if "part" in os.path.basename(text_path): + assert ( + os.path.basename(text_path).replace(".txt.gz", "").split("_")[-1] + == os.path.basename(meta_path).replace(".jsonl.gz", "").split("_")[-1] + ) + else: + assert len(metadata_and_text_files) == 1 return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"metadata_and_text_files": metadata_and_text_files}), ] @@ -397,10 +409,14 @@ class Oscar2109(datasets.GeneratorBasedBuilder): def _generate_examples(self, metadata_and_text_files): """This function returns the examples in the raw (text) form by iterating on all the files.""" id_ = 0 - offset = 0 for meta_path, text_path in metadata_and_text_files: + # line offsets are per text file + offset = 0 logger.info("generating examples from = %s", text_path) - with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8") as text_f: + # some texts contain non-Unix newlines that should not be + # interpreted as line breaks for the line counts in the metadata + # with readline() + with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8", newline="\n") as text_f: with gzip.open(open(meta_path, "rb"), "rt", encoding="utf-8") as meta_f: for line in meta_f: # read meta @@ -411,7 +427,12 @@ class Oscar2109(datasets.GeneratorBasedBuilder): offset += 1 text_f.readline() # read text - text = "".join([text_f.readline() for _ in range(meta["nb_sentences"])]).rstrip() + text_lines = [text_f.readline() for _ in range(meta["nb_sentences"])] + # all lines contain text (no blank lines or EOF) + assert all(text_lines) + assert "\n" not in text_lines offset += meta["nb_sentences"] + # only strip the trailing newline + text = "".join(text_lines).rstrip("\n") yield id_, {"id": id_, "text": text, "meta": meta} id_ += 1 ``` I've tested this with a number of smaller deduplicated languages with 1-20 parts and the resulting datasets looked correct in terms of word count and size when compared to the data splits table and raw texts, and the text/metadata alignments were correct in all my spot checks. However, there are many many languages I didn't test and I'm not sure that there aren't any texts containing blank lines in the corpus, for instance. For the cases I tested, the assertions related to blank lines and EOF made it easier to verify that the text and metadata were aligned as intended, since there would be little chance of spurious alignments of variable-length texts across so much data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3704/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3703/comments
https://api.github.com/repos/huggingface/datasets/issues/3703/events
https://github.com/huggingface/datasets/issues/3703
1,131,882,772
I_kwDODunzps5DdykU
3,703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
{ "login": "zhangyifei1", "id": 28425091, "node_id": "MDQ6VXNlcjI4NDI1MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/28425091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangyifei1", "html_url": "https://github.com/zhangyifei1", "followers_url": "https://api.github.com/users/zhangyifei1/followers", "following_url": "https://api.github.com/users/zhangyifei1/following{/other_user}", "gists_url": "https://api.github.com/users/zhangyifei1/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangyifei1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangyifei1/subscriptions", "organizations_url": "https://api.github.com/users/zhangyifei1/orgs", "repos_url": "https://api.github.com/users/zhangyifei1/repos", "events_url": "https://api.github.com/users/zhangyifei1/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangyifei1/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
2
2022-02-11T06:38:42
2022-02-11T06:40:18
null
NONE
null
null
null
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 604, in <module> main() File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 481, in main metric = load_metric(path='mymetric/seqeval/seqeval.py') File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 610, in load_metric dataset=False, File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 450, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' **What should I do? Please help me, thank you**
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3703/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3702/comments
https://api.github.com/repos/huggingface/datasets/issues/3702/events
https://github.com/huggingface/datasets/pull/3702
1,130,666,707
PR_kwDODunzps4yahKc
3,702
Update the address to use https
{ "login": "yazdanbakhsh", "id": 7105134, "node_id": "MDQ6VXNlcjcxMDUxMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7105134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yazdanbakhsh", "html_url": "https://github.com/yazdanbakhsh", "followers_url": "https://api.github.com/users/yazdanbakhsh/followers", "following_url": "https://api.github.com/users/yazdanbakhsh/following{/other_user}", "gists_url": "https://api.github.com/users/yazdanbakhsh/gists{/gist_id}", "starred_url": "https://api.github.com/users/yazdanbakhsh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yazdanbakhsh/subscriptions", "organizations_url": "https://api.github.com/users/yazdanbakhsh/orgs", "repos_url": "https://api.github.com/users/yazdanbakhsh/repos", "events_url": "https://api.github.com/users/yazdanbakhsh/events{/privacy}", "received_events_url": "https://api.github.com/users/yazdanbakhsh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-10T18:46:30
2022-02-10T18:46:30
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3702", "html_url": "https://github.com/huggingface/datasets/pull/3702", "diff_url": "https://github.com/huggingface/datasets/pull/3702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3702.patch", "merged_at": null }
The http address doesn't work anymore
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3702/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3701/comments
https://api.github.com/repos/huggingface/datasets/issues/3701/events
https://github.com/huggingface/datasets/pull/3701
1,130,498,738
PR_kwDODunzps4yZ8Dw
3,701
Pin ElasticSearch
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-10T17:15:26
2022-02-10T17:31:13
2022-02-10T17:31:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3701", "html_url": "https://github.com/huggingface/datasets/pull/3701", "diff_url": "https://github.com/huggingface/datasets/pull/3701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3701.patch", "merged_at": "2022-02-10T17:31:12" }
Until we manage to support ES 8.0, I'm setting the version to `<8.0.0` Currently we're getting this error on 8.0: ```python ValueError: Either 'hosts' or 'cloud_id' must be specified ``` When instantiating a `Elasticsearch()` object
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3701/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3700/comments
https://api.github.com/repos/huggingface/datasets/issues/3700/events
https://github.com/huggingface/datasets/issues/3700
1,130,252,496
I_kwDODunzps5DXkjQ
3,700
Unable to load a dataset
{ "login": "PaulchauvinAI", "id": 97964230, "node_id": "U_kgDOBdbQxg", "avatar_url": "https://avatars.githubusercontent.com/u/97964230?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulchauvinAI", "html_url": "https://github.com/PaulchauvinAI", "followers_url": "https://api.github.com/users/PaulchauvinAI/followers", "following_url": "https://api.github.com/users/PaulchauvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/PaulchauvinAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulchauvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulchauvinAI/subscriptions", "organizations_url": "https://api.github.com/users/PaulchauvinAI/orgs", "repos_url": "https://api.github.com/users/PaulchauvinAI/repos", "events_url": "https://api.github.com/users/PaulchauvinAI/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulchauvinAI/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2022-02-10T15:05:53
2022-02-11T22:56:39
2022-02-11T22:56:39
NONE
null
null
null
## Describe the bug Unable to load a dataset from Huggingface that I have just saved. ## Steps to reproduce the bug On Google colab `! pip install datasets ` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `dataset.save_to_disk(my_path)` `dataset = load_dataset(my_path)` ## Expected results Loading the dataset ## Actual results ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: null _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: string to {'builder_name': Value(dtype='string', id=None), 'citation': Value(dtype='string', id=None), 'config_name': Value(dtype='string', id=None), 'dataset_size': Value(dtype='int64', id=None), 'description': Value(dtype='string', id=None), 'download_checksums': {}, 'download_size': Value(dtype='int64', id=None), 'features': {'title': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}, 'text': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'post_processed': Value(dtype='null', id=None), 'post_processing_size': Value(dtype='null', id=None), 'size_in_bytes': Value(dtype='int64', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='string', id=None)}}, 'supervised_keys': Value(dtype='null', id=None), 'task_templates': Value(dtype='null', id=None), 'version': {'version_str': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'major': Value(dtype='int64', id=None), 'minor': Value(dtype='int64', id=None), 'patch': Value(dtype='int64', id=None)}} because column names don't match ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3700/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3699/comments
https://api.github.com/repos/huggingface/datasets/issues/3699/events
https://github.com/huggingface/datasets/pull/3699
1,130,200,593
PR_kwDODunzps4yY49I
3,699
Add dev-only config to Natural Questions dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-02-10T14:42:24
2022-02-11T09:50:22
2022-02-11T09:50:21
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3699", "html_url": "https://github.com/huggingface/datasets/pull/3699", "diff_url": "https://github.com/huggingface/datasets/pull/3699.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3699.patch", "merged_at": "2022-02-11T09:50:21" }
As suggested by @lhoestq and @thomwolf, a new config has been added to Natural Questions dataset, so that only dev split can be downloaded. Fix #413.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3699/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3698/comments
https://api.github.com/repos/huggingface/datasets/issues/3698/events
https://github.com/huggingface/datasets/pull/3698
1,129,864,282
PR_kwDODunzps4yXtyQ
3,698
Add finetune-data CodeFill
{ "login": "rgismondi", "id": 49989029, "node_id": "MDQ6VXNlcjQ5OTg5MDI5", "avatar_url": "https://avatars.githubusercontent.com/u/49989029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rgismondi", "html_url": "https://github.com/rgismondi", "followers_url": "https://api.github.com/users/rgismondi/followers", "following_url": "https://api.github.com/users/rgismondi/following{/other_user}", "gists_url": "https://api.github.com/users/rgismondi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rgismondi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rgismondi/subscriptions", "organizations_url": "https://api.github.com/users/rgismondi/orgs", "repos_url": "https://api.github.com/users/rgismondi/repos", "events_url": "https://api.github.com/users/rgismondi/events{/privacy}", "received_events_url": "https://api.github.com/users/rgismondi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-10T11:12:51
2022-02-10T11:12:51
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3698", "html_url": "https://github.com/huggingface/datasets/pull/3698", "diff_url": "https://github.com/huggingface/datasets/pull/3698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3698.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3698/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3697/comments
https://api.github.com/repos/huggingface/datasets/issues/3697/events
https://github.com/huggingface/datasets/pull/3697
1,129,795,724
PR_kwDODunzps4yXeXo
3,697
Add code-fill datasets for pretraining/finetuning/evaluating
{ "login": "rgismondi", "id": 49989029, "node_id": "MDQ6VXNlcjQ5OTg5MDI5", "avatar_url": "https://avatars.githubusercontent.com/u/49989029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rgismondi", "html_url": "https://github.com/rgismondi", "followers_url": "https://api.github.com/users/rgismondi/followers", "following_url": "https://api.github.com/users/rgismondi/following{/other_user}", "gists_url": "https://api.github.com/users/rgismondi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rgismondi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rgismondi/subscriptions", "organizations_url": "https://api.github.com/users/rgismondi/orgs", "repos_url": "https://api.github.com/users/rgismondi/repos", "events_url": "https://api.github.com/users/rgismondi/events{/privacy}", "received_events_url": "https://api.github.com/users/rgismondi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-10T10:31:48
2022-02-10T11:00:44
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3697", "html_url": "https://github.com/huggingface/datasets/pull/3697", "diff_url": "https://github.com/huggingface/datasets/pull/3697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3697.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3697/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3696/comments
https://api.github.com/repos/huggingface/datasets/issues/3696/events
https://github.com/huggingface/datasets/pull/3696
1,129,764,534
PR_kwDODunzps4yXXgH
3,696
Force unique keys in newsqa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-10T10:09:19
2022-02-10T10:09:19
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3696", "html_url": "https://github.com/huggingface/datasets/pull/3696", "diff_url": "https://github.com/huggingface/datasets/pull/3696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3696.patch", "merged_at": null }
Currently, it may raise `DuplicatedKeysError`. Fix #3630.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3696/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3695/comments
https://api.github.com/repos/huggingface/datasets/issues/3695/events
https://github.com/huggingface/datasets/pull/3695
1,129,730,148
PR_kwDODunzps4yXP44
3,695
Fix ClassLabel to/from dict when passed names_file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-10T09:47:10
2022-02-11T23:02:32
2022-02-11T23:02:31
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3695", "html_url": "https://github.com/huggingface/datasets/pull/3695", "diff_url": "https://github.com/huggingface/datasets/pull/3695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3695.patch", "merged_at": "2022-02-11T23:02:31" }
Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`. This PR, removes `names_file` as a field of the data class `ClassLabel`. - it is only used at instantiation to generate the `labels` field Fix #3631.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3695/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3693/comments
https://api.github.com/repos/huggingface/datasets/issues/3693/events
https://github.com/huggingface/datasets/pull/3693
1,128,554,365
PR_kwDODunzps4yTTcQ
3,693
Standardize to `Example::`
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-09T13:37:13
2022-02-09T13:37:13
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3693", "html_url": "https://github.com/huggingface/datasets/pull/3693", "diff_url": "https://github.com/huggingface/datasets/pull/3693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3693.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3693/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3692/comments
https://api.github.com/repos/huggingface/datasets/issues/3692/events
https://github.com/huggingface/datasets/pull/3692
1,128,320,004
PR_kwDODunzps4yShiu
3,692
Update data URL in pubmed dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-02-09T10:06:21
2022-02-10T14:58:00
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3692", "html_url": "https://github.com/huggingface/datasets/pull/3692", "diff_url": "https://github.com/huggingface/datasets/pull/3692.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3692.patch", "merged_at": null }
Fix #3655.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3692/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3691/comments
https://api.github.com/repos/huggingface/datasets/issues/3691/events
https://github.com/huggingface/datasets/pull/3691
1,127,629,306
PR_kwDODunzps4yQThV
3,691
Upgrade black to version ~=22.0
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-08T18:45:19
2022-02-08T19:56:40
2022-02-08T19:56:39
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3691", "html_url": "https://github.com/huggingface/datasets/pull/3691", "diff_url": "https://github.com/huggingface/datasets/pull/3691.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3691.patch", "merged_at": "2022-02-08T19:56:39" }
Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3691/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3690/comments
https://api.github.com/repos/huggingface/datasets/issues/3690/events
https://github.com/huggingface/datasets/pull/3690
1,127,493,538
PR_kwDODunzps4yP2p5
3,690
WIP: update docs to new frontend/UI
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-08T16:38:09
2022-02-11T16:22:10
null
NONE
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3690", "html_url": "https://github.com/huggingface/datasets/pull/3690", "diff_url": "https://github.com/huggingface/datasets/pull/3690.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3690.patch", "merged_at": null }
### TLDR: Update `datasets` `docs` to the new syntax & frontend (as how it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index)) ## Checklist - [ ] update datasets docs to new syntax (should call `doc-builder convert`) (this PR) - [x] discuss `@property` methods frontend https://github.com/huggingface/doc-builder/pull/87 - [x] discuss `inject_arrow_table_documentation` (this PR) https://github.com/huggingface/datasets/pull/3690#discussion_r801847860 - [x] update datasets docs path on moon-landing https://github.com/huggingface/moon-landing/pull/2089 - [ ] update nginx `docs/datasets` to route to moon-landing (do similar to internal repo # 81) - [x] convert pyarrow docstring from Numpydoc style to groups style https://github.com/huggingface/doc-builder/pull/89(https://stackoverflow.com/a/24385103/6558628) - [x] handle `Raises` section on frontend and doc-builder https://github.com/huggingface/doc-builder/pull/86 - [x] check imgs path (this PR) (nothing to update here) - [ ] delete sphinx related files (this PR) - [ ] update github actions (doc quality check & PR doc) - [x] doc exaples block has to follow format `Examples::` https://github.com/huggingface/datasets/pull/3693 - [x] add `versions.yml` in doc-build https://github.com/huggingface/doc-build/pull/1 - [ ] add `versions.yml` in doc-build-dev
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3690/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3690/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3689/comments
https://api.github.com/repos/huggingface/datasets/issues/3689/events
https://github.com/huggingface/datasets/pull/3689
1,127,422,478
PR_kwDODunzps4yPnp7
3,689
Fix streaming for servers not supporting HTTP range requests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2022-02-08T15:41:05
2022-02-10T16:51:25
2022-02-10T16:51:25
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3689", "html_url": "https://github.com/huggingface/datasets/pull/3689", "diff_url": "https://github.com/huggingface/datasets/pull/3689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3689.patch", "merged_at": "2022-02-10T16:51:24" }
Some servers do not support HTTP range requests, whereas this is required to stream some file formats (like ZIP). ~~This PR implements a workaround for those cases, by download the files locally in a temporary directory (cleaned up by the OS once the process is finished).~~ This PR raises custom error explaining that streaming is not possible because data host server does not support HTTP range requests. Fix #3677.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3689/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3688/comments
https://api.github.com/repos/huggingface/datasets/issues/3688/events
https://github.com/huggingface/datasets/issues/3688
1,127,218,321
I_kwDODunzps5DL_yR
3,688
Pyarrow version error
{ "login": "Zaker237", "id": 49993443, "node_id": "MDQ6VXNlcjQ5OTkzNDQz", "avatar_url": "https://avatars.githubusercontent.com/u/49993443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zaker237", "html_url": "https://github.com/Zaker237", "followers_url": "https://api.github.com/users/Zaker237/followers", "following_url": "https://api.github.com/users/Zaker237/following{/other_user}", "gists_url": "https://api.github.com/users/Zaker237/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zaker237/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zaker237/subscriptions", "organizations_url": "https://api.github.com/users/Zaker237/orgs", "repos_url": "https://api.github.com/users/Zaker237/repos", "events_url": "https://api.github.com/users/Zaker237/events{/privacy}", "received_events_url": "https://api.github.com/users/Zaker237/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2022-02-08T12:53:59
2022-02-09T06:35:33
2022-02-09T06:35:32
NONE
null
null
null
## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed with all version of pyarrow execpt `4.0.0` but still get the same error. ## Steps to reproduce the bug ```python import datasets ``` ## Expected results A clear and concise description of the expected results. ## Actual results AttributeError Traceback (most recent call last) <ipython-input-19-652e886d387f> in <module> ----> 1 import datasets ~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module> 26 27 ---> 28 if _version.parse(pyarrow.__version__).major < 3: 29 raise ImportWarning( 30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n" AttributeError: 'Version' object has no attribute 'major' ## Environment info Traceback (most recent call last): File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module> File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module> if _version.parse(pyarrow.__version__).major < 3: AttributeError: 'Version' object has no attribute 'major' - `datasets` version: - Platform: Linux(Ubuntu) and Windows: conda on the both - Python version: 3.7 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3688/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3687/comments
https://api.github.com/repos/huggingface/datasets/issues/3687/events
https://github.com/huggingface/datasets/issues/3687
1,127,154,766
I_kwDODunzps5DLwRO
3,687
Can't get the text data when calling to_tf_dataset
{ "login": "phrasenmaeher", "id": 82086367, "node_id": "MDQ6VXNlcjgyMDg2MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/82086367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phrasenmaeher", "html_url": "https://github.com/phrasenmaeher", "followers_url": "https://api.github.com/users/phrasenmaeher/followers", "following_url": "https://api.github.com/users/phrasenmaeher/following{/other_user}", "gists_url": "https://api.github.com/users/phrasenmaeher/gists{/gist_id}", "starred_url": "https://api.github.com/users/phrasenmaeher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phrasenmaeher/subscriptions", "organizations_url": "https://api.github.com/users/phrasenmaeher/orgs", "repos_url": "https://api.github.com/users/phrasenmaeher/repos", "events_url": "https://api.github.com/users/phrasenmaeher/events{/privacy}", "received_events_url": "https://api.github.com/users/phrasenmaeher/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
null
5
2022-02-08T11:52:10
2022-02-08T16:54:55
null
NONE
null
null
null
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors="tf") dataset = load_dataset("sst") train_dataset = dataset["train"].to_tf_dataset(columns=['sentence'], label_cols="label", shuffle=True, batch_size=8,collate_fn=data_collator) ``` However, this only gets me the labels; the text--the most important part--is missing: ``` for s in train_dataset.take(1): print(s) #prints something like: ({}, <tf.Tensor: shape=(8,), ...>) ``` As you can see, it only returns the label part, not the data, as indicated by the empty dictionary, `{}`. So far, I've played with various settings of the method arguments, but to no avail; I do not want to perform any text processing at this time. On my quest to achieve what I want ( a `tf.data.Dataset`), I've consulted these resources: [https://www.philschmid.de/huggingface-transformers-keras-tf](https://www.philschmid.de/huggingface-transformers-keras-tf) [https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow](https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow) I was surprised to not find more extensive examples on how to transform a Hugginface dataset to one compatible with TensorFlow. If you could point me to where I am going wrong, please do so. Thanks in advance for your support. --- Edit: In the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset), I found the following description: _In general, only columns that the model can use as input should be included here (numeric data only)._ Does this imply that no textual, i.e., `string` data can be loaded?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3687/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3686/comments
https://api.github.com/repos/huggingface/datasets/issues/3686/events
https://github.com/huggingface/datasets/issues/3686
1,127,137,290
I_kwDODunzps5DLsAK
3,686
`Translation` features cannot be `flatten`ed
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
1
2022-02-08T11:33:48
2022-02-08T13:52:34
null
CONTRIBUTOR
null
null
null
## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8] ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]") print(dataset.features) # {'translation': Translation(languages=['en', 'fr'], id=None)} print(dataset[0]) # {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}} dataset.flatten() ``` ## Expected results `dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")` ```python dataset[0] # {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' } dataset.features # {'translation.en': Value("string"), 'translation.fr': Value("string")} ``` ## Actual results ```python In [31]: dset.flatten() --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-31-bb88eb5276ee> in <module> ----> 1 dset.flatten() [...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms [...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth) 1294 break 1295 dataset.info.features = self.features.flatten(max_depth=max_depth) -> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features) 1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.') 1298 dataset._fingerprint = new_fingerprint [...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) [...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) KeyError: 'translation.en' ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3686/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3685/comments
https://api.github.com/repos/huggingface/datasets/issues/3685/events
https://github.com/huggingface/datasets/pull/3685
1,126,240,444
PR_kwDODunzps4yLw3m
3,685
Add support for `Audio` and `Image` feature in `push_to_hub`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-02-07T16:47:16
2022-02-11T19:40:00
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3685", "html_url": "https://github.com/huggingface/datasets/pull/3685", "diff_url": "https://github.com/huggingface/datasets/pull/3685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3685.patch", "merged_at": null }
Add support for the `Audio` and the `Image` feature in `push_to_hub`. The idea is to remove local path information and store file content under "bytes" in the Arrow table before the push. My initial approach (https://github.com/huggingface/datasets/commit/34c652afeff9686b6b8bf4e703c84d2205d670aa) was to use a map transform similar to [`decode_nested_example`](https://github.com/huggingface/datasets/blob/5e0f6068741464f833ff1802e24ecc2064aaea9f/src/datasets/features/features.py#L1023-L1056) while having decoding turned off, but I wasn't satisfied with the code quality, so I ended up using the `temporary_assignment` decorator to override `cast_storage`, which allows me to directly modify the underlying storage (the final op is similar to `Dataset.cast`) and results in a much simpler code. Additionally, I added the `allow_cast` flag that can disable this behavior in the situations where it's not needed (e.g. the dataset is already in the correct format for the Hub, etc.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3685/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3684/comments
https://api.github.com/repos/huggingface/datasets/issues/3684/events
https://github.com/huggingface/datasets/pull/3684
1,125,133,664
PR_kwDODunzps4yIOer
3,684
[fix]: iwslt2017 download urls
{ "login": "msarmi9", "id": 48395294, "node_id": "MDQ6VXNlcjQ4Mzk1Mjk0", "avatar_url": "https://avatars.githubusercontent.com/u/48395294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/msarmi9", "html_url": "https://github.com/msarmi9", "followers_url": "https://api.github.com/users/msarmi9/followers", "following_url": "https://api.github.com/users/msarmi9/following{/other_user}", "gists_url": "https://api.github.com/users/msarmi9/gists{/gist_id}", "starred_url": "https://api.github.com/users/msarmi9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msarmi9/subscriptions", "organizations_url": "https://api.github.com/users/msarmi9/orgs", "repos_url": "https://api.github.com/users/msarmi9/repos", "events_url": "https://api.github.com/users/msarmi9/events{/privacy}", "received_events_url": "https://api.github.com/users/msarmi9/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
4
2022-02-06T07:56:55
2022-02-09T08:39:31
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3684", "html_url": "https://github.com/huggingface/datasets/pull/3684", "diff_url": "https://github.com/huggingface/datasets/pull/3684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3684.patch", "merged_at": null }
Fixes #2076.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3684/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3683/comments
https://api.github.com/repos/huggingface/datasets/issues/3683/events
https://github.com/huggingface/datasets/pull/3683
1,124,458,371
PR_kwDODunzps4yGKoj
3,683
added told-br (brazilian hate speech) dataset
{ "login": "JAugusto97", "id": 26556320, "node_id": "MDQ6VXNlcjI2NTU2MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/26556320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JAugusto97", "html_url": "https://github.com/JAugusto97", "followers_url": "https://api.github.com/users/JAugusto97/followers", "following_url": "https://api.github.com/users/JAugusto97/following{/other_user}", "gists_url": "https://api.github.com/users/JAugusto97/gists{/gist_id}", "starred_url": "https://api.github.com/users/JAugusto97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JAugusto97/subscriptions", "organizations_url": "https://api.github.com/users/JAugusto97/orgs", "repos_url": "https://api.github.com/users/JAugusto97/repos", "events_url": "https://api.github.com/users/JAugusto97/events{/privacy}", "received_events_url": "https://api.github.com/users/JAugusto97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-02-04T17:44:32
2022-02-07T21:14:52
2022-02-07T21:14:52
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3683", "html_url": "https://github.com/huggingface/datasets/pull/3683", "diff_url": "https://github.com/huggingface/datasets/pull/3683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3683.patch", "merged_at": "2022-02-07T21:14:52" }
Hey, Adding ToLD-Br. Feel free to ask for modifications. Thanks!!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3683/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3682/comments
https://api.github.com/repos/huggingface/datasets/issues/3682/events
https://github.com/huggingface/datasets/pull/3682
1,124,434,330
PR_kwDODunzps4yGFml
3,682
adding told-br for toxic/abusive hatespeech detection
{ "login": "JAugusto97", "id": 26556320, "node_id": "MDQ6VXNlcjI2NTU2MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/26556320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JAugusto97", "html_url": "https://github.com/JAugusto97", "followers_url": "https://api.github.com/users/JAugusto97/followers", "following_url": "https://api.github.com/users/JAugusto97/following{/other_user}", "gists_url": "https://api.github.com/users/JAugusto97/gists{/gist_id}", "starred_url": "https://api.github.com/users/JAugusto97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JAugusto97/subscriptions", "organizations_url": "https://api.github.com/users/JAugusto97/orgs", "repos_url": "https://api.github.com/users/JAugusto97/repos", "events_url": "https://api.github.com/users/JAugusto97/events{/privacy}", "received_events_url": "https://api.github.com/users/JAugusto97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-02-04T17:18:29
2022-02-07T03:23:24
2022-02-04T17:36:40
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3682", "html_url": "https://github.com/huggingface/datasets/pull/3682", "diff_url": "https://github.com/huggingface/datasets/pull/3682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3682.patch", "merged_at": null }
Hey, I'm adding our dataset from our paper published at AACL 2020. Feel free to ask for modifications. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3682/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3681/comments
https://api.github.com/repos/huggingface/datasets/issues/3681/events
https://github.com/huggingface/datasets/pull/3681
1,124,237,458
PR_kwDODunzps4yFcpM
3,681
Fix TestCommand to move dataset_infos instead of copying
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
5
2022-02-04T14:01:52
2022-02-04T18:47:16
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3681", "html_url": "https://github.com/huggingface/datasets/pull/3681", "diff_url": "https://github.com/huggingface/datasets/pull/3681.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3681.patch", "merged_at": null }
Why do we copy instead of moving the file? CC: @lhoestq @lvwerra
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3681/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3680/comments
https://api.github.com/repos/huggingface/datasets/issues/3680/events
https://github.com/huggingface/datasets/pull/3680
1,124,213,416
PR_kwDODunzps4yFXm8
3,680
Fix TestCommand to copy dataset_infos to local dir with only data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-04T13:36:46
2022-02-08T10:32:55
2022-02-08T10:32:55
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3680", "html_url": "https://github.com/huggingface/datasets/pull/3680", "diff_url": "https://github.com/huggingface/datasets/pull/3680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3680.patch", "merged_at": "2022-02-08T10:32:55" }
Currently this case is missed. CC: @lvwerra
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3680/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3679/comments
https://api.github.com/repos/huggingface/datasets/issues/3679/events
https://github.com/huggingface/datasets/issues/3679
1,124,062,133
I_kwDODunzps5C_9O1
3,679
Download datasets from a private hub
{ "login": "juliensimon", "id": 3436143, "node_id": "MDQ6VXNlcjM0MzYxNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juliensimon", "html_url": "https://github.com/juliensimon", "followers_url": "https://api.github.com/users/juliensimon/followers", "following_url": "https://api.github.com/users/juliensimon/following{/other_user}", "gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}", "starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions", "organizations_url": "https://api.github.com/users/juliensimon/orgs", "repos_url": "https://api.github.com/users/juliensimon/repos", "events_url": "https://api.github.com/users/juliensimon/events{/privacy}", "received_events_url": "https://api.github.com/users/juliensimon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3814924348, "node_id": "LA_kwDODunzps7jYyA8", "url": "https://api.github.com/repos/huggingface/datasets/labels/private-hub", "name": "private-hub", "color": "A929D8", "default": false, "description": "" } ]
open
false
null
[]
null
2
2022-02-04T10:49:06
2022-02-09T15:04:25
null
NONE
null
null
null
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3679/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3678/comments
https://api.github.com/repos/huggingface/datasets/issues/3678/events
https://github.com/huggingface/datasets/pull/3678
1,123,402,426
PR_kwDODunzps4yCt91
3,678
Add code example in wikipedia card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-03T18:09:02
2022-02-04T13:21:39
2022-02-04T13:21:39
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3678", "html_url": "https://github.com/huggingface/datasets/pull/3678", "diff_url": "https://github.com/huggingface/datasets/pull/3678.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3678.patch", "merged_at": "2022-02-04T13:21:39" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3678/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3677/comments
https://api.github.com/repos/huggingface/datasets/issues/3677/events
https://github.com/huggingface/datasets/issues/3677
1,123,192,866
I_kwDODunzps5C8pAi
3,677
Discovery cannot be streamed anymore
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
CONTRIBUTOR
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first row of the train split. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 365, in __iter__ for key, example in self._iter(): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 362, in _iter yield from ex_iterable File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__ yield from islice(self.ex_iterable, self.n) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 79, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/discovery/542fab7a9ddc1d9726160355f7baa06a1ccc44c40bc8e12c09e9bc743aca43a2/discovery.py", line 333, in _generate_examples with open(data_file, encoding="utf8") as f: File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 64, in wrapper return function(*args, use_auth_token=use_auth_token, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 369, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 456, in open return open_files( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 288, in open_files fs, fs_token, paths = get_fs_token_paths( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 611, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 253, in filesystem return cls(**storage_options) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 68, in __call__ obj = super().__call__(*args, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__ self.zip = zipfile.ZipFile(self.fo) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1257, in __init__ self._RealGetContents() File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1320, in _RealGetContents endrec = _EndRecData(fp) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 263, in _EndRecData fpin.seek(0, 2) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 676, in seek raise ValueError("Cannot seek streaming HTTP file") ValueError: Cannot seek streaming HTTP file ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.11.0-1027-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3677/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3676/comments
https://api.github.com/repos/huggingface/datasets/issues/3676/events
https://github.com/huggingface/datasets/issues/3676
1,123,096,362
I_kwDODunzps5C8Rcq
3,676
`None` replaced by `[]` after first batch in map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
2022-02-03T13:36:48
2022-02-03T16:30:52
null
MEMBER
null
null
null
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] ``` This issue has been experienced when running the `run_qa.py` example from `transformers` (see issue https://github.com/huggingface/transformers/issues/15401) This can be due to a bug in when casting `None` in nested lists. Casting only happens after the first batch, since the first batch is used to infer the feature types. cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3676/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/3676/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3675/comments
https://api.github.com/repos/huggingface/datasets/issues/3675/events
https://github.com/huggingface/datasets/issues/3675
1,123,078,408
I_kwDODunzps5C8NEI
3,675
Add CodeContests dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
1
2022-02-03T13:20:00
2022-02-10T20:50:38
null
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine-learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3675/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3674/comments
https://api.github.com/repos/huggingface/datasets/issues/3674/events
https://github.com/huggingface/datasets/pull/3674
1,123,027,874
PR_kwDODunzps4yBe17
3,674
Add FrugalScore metric
{ "login": "moussaKam", "id": 28675016, "node_id": "MDQ6VXNlcjI4Njc1MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moussaKam", "html_url": "https://github.com/moussaKam", "followers_url": "https://api.github.com/users/moussaKam/followers", "following_url": "https://api.github.com/users/moussaKam/following{/other_user}", "gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}", "starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions", "organizations_url": "https://api.github.com/users/moussaKam/orgs", "repos_url": "https://api.github.com/users/moussaKam/repos", "events_url": "https://api.github.com/users/moussaKam/events{/privacy}", "received_events_url": "https://api.github.com/users/moussaKam/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-02-03T12:28:52
2022-02-08T15:28:56
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3674", "html_url": "https://github.com/huggingface/datasets/pull/3674", "diff_url": "https://github.com/huggingface/datasets/pull/3674.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3674.patch", "merged_at": null }
This pull request add FrugalScore metric for NLG systems evaluation. FrugalScore is a reference-based metric for NLG models evaluation. It is based on a distillation approach that allows to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance. Paper: https://arxiv.org/abs/2110.08559?context=cs Github: https://github.com/moussaKam/FrugalScore @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3674/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3673/comments
https://api.github.com/repos/huggingface/datasets/issues/3673/events
https://github.com/huggingface/datasets/issues/3673
1,123,010,520
I_kwDODunzps5C78fY
3,673
`load_dataset("snli")` is different from dataset viewer
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
9
2022-02-03T12:10:43
2022-02-11T17:01:21
2022-02-11T17:01:21
NONE
null
null
null
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is this expected? ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.4 - Python version: 3.7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3673/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3672/comments
https://api.github.com/repos/huggingface/datasets/issues/3672/events
https://github.com/huggingface/datasets/pull/3672
1,122,980,556
PR_kwDODunzps4yBUrZ
3,672
Prioritize `module.builder_kwargs` over defaults in `TestCommand`
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-03T11:38:42
2022-02-04T12:37:20
2022-02-04T12:37:19
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3672", "html_url": "https://github.com/huggingface/datasets/pull/3672", "diff_url": "https://github.com/huggingface/datasets/pull/3672.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3672.patch", "merged_at": "2022-02-04T12:37:19" }
This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both default and `module.builder_kwargs`. Example error: ```Python Traceback (most recent call last): File "create_metadata.py", line 96, in <module> main(**vars(args)) File "create_metadata.py", line 86, in main metadata_command.run() File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 144, in run for j, builder in enumerate(get_builders()): File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 141, in get_builders name=name, cache_dir=self._cache_dir, data_dir=self._data_dir, **module.builder_kwargs TypeError: type object got multiple values for keyword argument 'name' ``` Let me know what you think.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3672/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3671/comments
https://api.github.com/repos/huggingface/datasets/issues/3671/events
https://github.com/huggingface/datasets/issues/3671
1,122,864,253
I_kwDODunzps5C7Yx9
3,671
Give an estimate of the dataset size in DatasetInfo
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2022-02-03T09:47:10
2022-02-03T09:47:10
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, only part of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would want to get this information, or an estimation, for all the datasets. **Describe the solution you'd like** - get access to the git information for the dataset files hosted on the hub - look at the [`Content-Length`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length) for the files served by HTTP
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3671/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3670/comments
https://api.github.com/repos/huggingface/datasets/issues/3670/events
https://github.com/huggingface/datasets/pull/3670
1,122,439,827
PR_kwDODunzps4x_kBx
3,670
feat: 🎸 generate info if dataset_infos.json does not exist
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-02-02T22:11:56
2022-02-11T20:24:35
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3670", "html_url": "https://github.com/huggingface/datasets/pull/3670", "diff_url": "https://github.com/huggingface/datasets/pull/3670.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3670.patch", "merged_at": null }
in get_dataset_infos(). Also: add the `use_auth_token` parameter, and create get_dataset_config_info() ✅ Closes: #3013
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3670/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3669/comments
https://api.github.com/repos/huggingface/datasets/issues/3669/events
https://github.com/huggingface/datasets/pull/3669
1,122,335,622
PR_kwDODunzps4x_OTI
3,669
Common voice validated partition
{ "login": "shalymin-amzn", "id": 98762373, "node_id": "U_kgDOBeL-hQ", "avatar_url": "https://avatars.githubusercontent.com/u/98762373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shalymin-amzn", "html_url": "https://github.com/shalymin-amzn", "followers_url": "https://api.github.com/users/shalymin-amzn/followers", "following_url": "https://api.github.com/users/shalymin-amzn/following{/other_user}", "gists_url": "https://api.github.com/users/shalymin-amzn/gists{/gist_id}", "starred_url": "https://api.github.com/users/shalymin-amzn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shalymin-amzn/subscriptions", "organizations_url": "https://api.github.com/users/shalymin-amzn/orgs", "repos_url": "https://api.github.com/users/shalymin-amzn/repos", "events_url": "https://api.github.com/users/shalymin-amzn/events{/privacy}", "received_events_url": "https://api.github.com/users/shalymin-amzn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
2022-02-02T20:04:43
2022-02-08T17:26:52
2022-02-08T17:23:12
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3669", "html_url": "https://github.com/huggingface/datasets/pull/3669", "diff_url": "https://github.com/huggingface/datasets/pull/3669.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3669.patch", "merged_at": "2022-02-08T17:23:12" }
This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet). As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be useful to train better models where no strict comparison with the previous work is intended.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3669/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3668/comments
https://api.github.com/repos/huggingface/datasets/issues/3668/events
https://github.com/huggingface/datasets/issues/3668
1,122,261,736
I_kwDODunzps5C5Fro
3,668
Couldn't cast array of type string error with cast_column
{ "login": "R4ZZ3", "id": 25264037, "node_id": "MDQ6VXNlcjI1MjY0MDM3", "avatar_url": "https://avatars.githubusercontent.com/u/25264037?v=4", "gravatar_id": "", "url": "https://api.github.com/users/R4ZZ3", "html_url": "https://github.com/R4ZZ3", "followers_url": "https://api.github.com/users/R4ZZ3/followers", "following_url": "https://api.github.com/users/R4ZZ3/following{/other_user}", "gists_url": "https://api.github.com/users/R4ZZ3/gists{/gist_id}", "starred_url": "https://api.github.com/users/R4ZZ3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/R4ZZ3/subscriptions", "organizations_url": "https://api.github.com/users/R4ZZ3/orgs", "repos_url": "https://api.github.com/users/R4ZZ3/repos", "events_url": "https://api.github.com/users/R4ZZ3/events{/privacy}", "received_events_url": "https://api.github.com/users/R4ZZ3/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2022-02-02T18:33:29
2022-02-09T07:07:42
2022-02-09T07:07:42
NONE
null
null
null
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264037/152214027-9c42a71a-dd24-463c-a346-57e0287e5a8f.png) This was working with datasets version 1.17.1.dev0 but now with version 1.18.3 produces the error above. ## Steps to reproduce the bug load dataset: ![image](https://user-images.githubusercontent.com/25264037/152216145-159553b6-cddc-4f0b-8607-7e76b600e22a.png) remove columns: ![image](https://user-images.githubusercontent.com/25264037/152214707-7c7e89d1-87d8-4b4f-8cfc-5d7223d35644.png) run my fix_path function. This also creates the audio column that is referring to the absolute file path of the audio ![image](https://user-images.githubusercontent.com/25264037/152214773-51f71ccf-d31b-4449-b63a-1af56436e49f.png) Then I concatenate few other datasets and finally try the cast_column method ![image](https://user-images.githubusercontent.com/25264037/152215032-f341ec86-9d6d-48c9-943b-e2efe37a4d98.png) but get error: ![image](https://user-images.githubusercontent.com/25264037/152215073-b85bd057-98e8-413c-9b05-51e9805f2c24.png) ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: OVH Cloud, AI Training section, container for Huggingface Robust Speech Recognition event image(baaastijn/ovh_huggingface) ![image](https://user-images.githubusercontent.com/25264037/152215161-b4ff7bfb-2736-4afb-9223-761a3338d23c.png) - Python version: 3.8.8 - PyArrow version: ![image](https://user-images.githubusercontent.com/25264037/152215936-4d365760-557e-456b-b5eb-ad1d15cf5073.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3668/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3667/comments
https://api.github.com/repos/huggingface/datasets/issues/3667/events
https://github.com/huggingface/datasets/pull/3667
1,122,060,630
PR_kwDODunzps4x-Ujt
3,667
Process .opus files with torchaudio
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
4
2022-02-02T15:23:14
2022-02-04T15:29:38
2022-02-04T15:29:38
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3667", "html_url": "https://github.com/huggingface/datasets/pull/3667", "diff_url": "https://github.com/huggingface/datasets/pull/3667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3667.patch", "merged_at": null }
@anton-l suggested to proccess .opus files with `torchaudio` instead of `soundfile` as it's faster: ![opus](https://user-images.githubusercontent.com/16348744/152177816-2df6076c-f28b-4aef-a08d-b499b921414d.png) (moreover, I didn't manage to load .opus files with `soundfile` / `librosa` locally on any my machine anyway for some reason, even with `ffmpeg` installed). For now my current changes work with locally stored file: ```python # download sample opus file (from MultilingualSpokenWords dataset) !wget https://huggingface.co/datasets/polinaeterna/test_opus/resolve/main/common_voice_tt_17737010.opus from datasets import Dataset, Audio audio_path = "common_voice_tt_17737010.opus" dataset = Dataset.from_dict({"audio": [audio_path]}).cast_column("audio", Audio(48000)) dataset[0] # {'audio': {'path': 'common_voice_tt_17737010.opus', # 'array': array([ 0.0000000e+00, 0.0000000e+00, 3.0517578e-05, ..., # -6.1035156e-05, 6.1035156e-05, 0.0000000e+00], dtype=float32), # 'sampling_rate': 48000}} ``` But it doesn't work when loading inside s dataset from bytes (I checked on [MultilingualSpokenWords](https://github.com/huggingface/datasets/pull/3666), the PR is a draft now, maybe the bug is somewhere there ) ```python import torchaudio with open(audio_path, "rb") as b: print(torchaudio.load(b)) # RuntimeError: Error loading audio file: failed to open file <in memory buffer> ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3667/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3666/comments
https://api.github.com/repos/huggingface/datasets/issues/3666/events
https://github.com/huggingface/datasets/pull/3666
1,122,058,894
PR_kwDODunzps4x-ULz
3,666
Multilingual Spoken Words
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-02-02T15:21:48
2022-02-11T17:30:28
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3666", "html_url": "https://github.com/huggingface/datasets/pull/3666", "diff_url": "https://github.com/huggingface/datasets/pull/3666.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3666.patch", "merged_at": null }
Add [Multillingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/) You can specify multiple languages for downloading 😌: ```python ds = load_dataset("datasets/ml_spoken_words", languages=["ar", "tt"]) ``` 1. I didn't take into account that each time you pass a set of languages the data for a specific language is downloaded even if it was downloaded before (since these are custom configs like `ar+tt` and `ar+tt+br`. Maybe that wasn't a good idea? 2. The script will have to be slightly changed after merge of https://github.com/huggingface/datasets/pull/3664 2. Just can't figure out what wrong with dummy files... 😞 Maybe we should get rid of them at some point 😁
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3666/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3666/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3665/comments
https://api.github.com/repos/huggingface/datasets/issues/3665/events
https://github.com/huggingface/datasets/pull/3665
1,121,753,385
PR_kwDODunzps4x9TnU
3,665
Fix MP3 resampling when a dataset's audio files have different sampling rates
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-02-02T10:31:45
2022-02-02T10:52:26
2022-02-02T10:52:26
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3665", "html_url": "https://github.com/huggingface/datasets/pull/3665", "diff_url": "https://github.com/huggingface/datasets/pull/3665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3665.patch", "merged_at": "2022-02-02T10:52:25" }
The resampler needs to be updated if the `orig_freq` doesn't match the audio file sampling rate Fix https://github.com/huggingface/datasets/issues/3662
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3665/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3664/comments
https://api.github.com/repos/huggingface/datasets/issues/3664/events
https://github.com/huggingface/datasets/pull/3664
1,121,233,301
PR_kwDODunzps4x7mg_
3,664
[WIP] Return local paths to Common Voice
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
16
2022-02-01T21:48:27
2022-02-11T23:32:08
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3664", "html_url": "https://github.com/huggingface/datasets/pull/3664", "diff_url": "https://github.com/huggingface/datasets/pull/3664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3664.patch", "merged_at": null }
Fixes https://github.com/huggingface/datasets/issues/3663 This is a proposed way of returning the old local file-based generator while keeping the new streaming generator intact. TODO: - [ ] brainstorm a bit more on https://github.com/huggingface/datasets/issues/3663 to see if we can do better - [ ] refactor the heck out of this PR to avoid completely copying the logic between the two generators
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3664/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3663/comments
https://api.github.com/repos/huggingface/datasets/issues/3663/events
https://github.com/huggingface/datasets/issues/3663
1,121,067,647
I_kwDODunzps5C0iJ_
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }, { "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }, { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
6
2022-02-01T18:40:10
2022-02-08T16:05:18
null
MEMBER
null
null
null
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results The path should be the complete absolute path to the downloaded audio file not some relative path. ## Actual results ```bash ~/hugging_face/venv_3.9/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file cv-corpus-6.1-2020-12-11/ab/clips/common_voice_ab_19904194.mp3 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3.dev0 - Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.27 - Python version: 3.9.1 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3663/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3662/comments
https://api.github.com/repos/huggingface/datasets/issues/3662/events
https://github.com/huggingface/datasets/issues/3662
1,121,024,403
I_kwDODunzps5C0XmT
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
MEMBER
null
null
null
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with sampling_rate=32000 !wget https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_700KB.mp3 import torchaudio audio_path = "file_example_MP3_700KB.mp3" audio_path2 = audio_path.replace(".mp3", "_resampled.mp3") resample = torchaudio.transforms.Resample(32000, 16000) # create a new file with sampling_rate=16000 torchaudio.save(audio_path2, resample(torchaudio.load(audio_path)[0]), 16000) ``` Then we can see an issue here when decoding: ```python from datasets import Dataset, Audio dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[0] # decode the first audio file sets the resampler orig_freq to 32000 print(dataset .features["audio"]._resampler.orig_freq) # 32000 print(dataset[0]["audio"]["array"].shape) # here decoding is fine # (1308096,) dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[1] # decode the second audio file sets the resampler orig_freq to 16000 print(dataset .features["audio"]._resampler.orig_freq) # 16000 print(dataset[0]["audio"]["array"].shape) # here decoding uses orig_freq=16000 instead of 32000 # (2616192,) ``` The value of `orig_freq` doesn't change no matter what file needs to be decoded cc @patrickvonplaten @anton-l @cahya-wirawan @albertvillanova The issue seems to be here in `Audio.decode_mp3`: https://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/features/audio.py#L176-L180
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3662/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3661/comments
https://api.github.com/repos/huggingface/datasets/issues/3661/events
https://github.com/huggingface/datasets/pull/3661
1,121,000,251
PR_kwDODunzps4x61ad
3,661
Remove unnecessary 'r' arg in
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-02-01T17:29:27
2022-02-07T16:57:27
2022-02-07T16:02:42
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3661", "html_url": "https://github.com/huggingface/datasets/pull/3661", "diff_url": "https://github.com/huggingface/datasets/pull/3661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3661.patch", "merged_at": "2022-02-07T16:02:42" }
Originally from #3489
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3661/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3660/comments
https://api.github.com/repos/huggingface/datasets/issues/3660/events
https://github.com/huggingface/datasets/pull/3660
1,120,982,671
PR_kwDODunzps4x6xr8
3,660
Change HTTP links to HTTPS
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2022-02-01T17:12:51
2022-02-01T18:34:47
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3660", "html_url": "https://github.com/huggingface/datasets/pull/3660", "diff_url": "https://github.com/huggingface/datasets/pull/3660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3660.patch", "merged_at": null }
I tested the links. I also fixed some typos. Originally from #3489
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3660/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3659/comments
https://api.github.com/repos/huggingface/datasets/issues/3659/events
https://github.com/huggingface/datasets/issues/3659
1,120,913,672
I_kwDODunzps5Cz8kI
3,659
push_to_hub but preview not working
{ "login": "thomas-happify", "id": 66082334, "node_id": "MDQ6VXNlcjY2MDgyMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomas-happify", "html_url": "https://github.com/thomas-happify", "followers_url": "https://api.github.com/users/thomas-happify/followers", "following_url": "https://api.github.com/users/thomas-happify/following{/other_user}", "gists_url": "https://api.github.com/users/thomas-happify/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomas-happify/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomas-happify/subscriptions", "organizations_url": "https://api.github.com/users/thomas-happify/orgs", "repos_url": "https://api.github.com/users/thomas-happify/repos", "events_url": "https://api.github.com/users/thomas-happify/events{/privacy}", "received_events_url": "https://api.github.com/users/thomas-happify/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
NONE
null
null
null
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3659/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3658/comments
https://api.github.com/repos/huggingface/datasets/issues/3658/events
https://github.com/huggingface/datasets/issues/3658
1,120,880,395
I_kwDODunzps5Cz0cL
3,658
Dataset viewer issue for *P3*
{ "login": "jeffistyping", "id": 22351555, "node_id": "MDQ6VXNlcjIyMzUxNTU1", "avatar_url": "https://avatars.githubusercontent.com/u/22351555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffistyping", "html_url": "https://github.com/jeffistyping", "followers_url": "https://api.github.com/users/jeffistyping/followers", "following_url": "https://api.github.com/users/jeffistyping/following{/other_user}", "gists_url": "https://api.github.com/users/jeffistyping/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffistyping/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffistyping/subscriptions", "organizations_url": "https://api.github.com/users/jeffistyping/orgs", "repos_url": "https://api.github.com/users/jeffistyping/repos", "events_url": "https://api.github.com/users/jeffistyping/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffistyping/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
0
2022-02-01T15:57:56
2022-02-01T15:57:56
null
NONE
null
null
null
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3658/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3657/comments
https://api.github.com/repos/huggingface/datasets/issues/3657/events
https://github.com/huggingface/datasets/pull/3657
1,120,602,620
PR_kwDODunzps4x5f1I
3,657
Extend dataset builder for streaming in `get_dataset_split_names`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2022-02-01T12:21:24
2022-02-03T22:49:06
2022-02-02T11:22:01
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3657", "html_url": "https://github.com/huggingface/datasets/pull/3657", "diff_url": "https://github.com/huggingface/datasets/pull/3657.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3657.patch", "merged_at": "2022-02-02T11:22:01" }
Currently, `get_dataset_split_names` doesn't extend a builder module to support streaming, even though it uses `StreamingDownloadManager` to download data. This PR fixes that. To test the change, run the following: ```bash pip install git+https://github.com/huggingface/datasets.git@fix-get_dataset_split_names-streaming python -c "from datasets import get_dataset_split_names; print(get_dataset_split_names('facebook/multilingual_librispeech', 'german', download_mode='force_redownload', revision='137923f945552c6afdd8b60e4a7b43e3088972c1'))" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3657/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3656/comments
https://api.github.com/repos/huggingface/datasets/issues/3656/events
https://github.com/huggingface/datasets/issues/3656
1,120,510,823
I_kwDODunzps5CyaNn
3,656
checksum error subjqa dataset
{ "login": "RensDimmendaal", "id": 9828683, "node_id": "MDQ6VXNlcjk4Mjg2ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RensDimmendaal", "html_url": "https://github.com/RensDimmendaal", "followers_url": "https://api.github.com/users/RensDimmendaal/followers", "following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}", "gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}", "starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions", "organizations_url": "https://api.github.com/users/RensDimmendaal/orgs", "repos_url": "https://api.github.com/users/RensDimmendaal/repos", "events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}", "received_events_url": "https://api.github.com/users/RensDimmendaal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
NONE
null
null
null
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-d2857d460155> in <module>() 2 from datasets import load_dataset 3 ----> 4 subjqa = load_dataset("subjqa","electronics") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/lewtun/SubjQA/archive/refs/heads/master.zip'] ``` ## Environment info Google colab - `datasets` version: 1.18.2 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3656/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3655/comments
https://api.github.com/repos/huggingface/datasets/issues/3655/events
https://github.com/huggingface/datasets/issues/3655
1,119,801,077
I_kwDODunzps5Cvs71
3,655
Pubmed dataset not reachable
{ "login": "abhi-mosaic", "id": 77638579, "node_id": "MDQ6VXNlcjc3NjM4NTc5", "avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhi-mosaic", "html_url": "https://github.com/abhi-mosaic", "followers_url": "https://api.github.com/users/abhi-mosaic/followers", "following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}", "gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions", "organizations_url": "https://api.github.com/users/abhi-mosaic/orgs", "repos_url": "https://api.github.com/users/abhi-mosaic/repos", "events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}", "received_events_url": "https://api.github.com/users/abhi-mosaic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-01-31T18:45:47
2022-02-11T15:54:06
null
NONE
null
null
null
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'")) ``` ## Environment info - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3655/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3654/comments
https://api.github.com/repos/huggingface/datasets/issues/3654/events
https://github.com/huggingface/datasets/pull/3654
1,119,717,475
PR_kwDODunzps4x2kiX
3,654
Better TQDM output
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-01-31T17:22:43
2022-02-03T15:55:34
2022-02-03T15:55:33
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3654", "html_url": "https://github.com/huggingface/datasets/pull/3654", "diff_url": "https://github.com/huggingface/datasets/pull/3654.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3654.patch", "merged_at": "2022-02-03T15:55:33" }
This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tqdm/commit/f7722edecc3010cb35cc1c923ac4850a76336f82) * adds the missing `drop_last_batch` and `with_ranks` params to `DatasetDict.map` * correctly computes the number of iterations in `map` and the CSV/JSON loader when `batched=True` to fix `tqdm` progress bars * removes the `bool(logging.get_verbosity() == logging.NOTSET)` (or simplifies `bool(logging.get_verbosity() == logging.NOTSET) or not utils.is_progress_bar_enabled()` to `not utils.is_progress_bar_enabled()`) condition and uses `utils.is_progress_bar_enabled` to check if `tqdm` output is enabled (this comment from @stas00 explains why the `bool(logging.get_verbosity() == logging.NOTSET)` check is problematic: https://github.com/huggingface/transformers/issues/14889#issue-1087318463) Fix #2630
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3654/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3653/comments
https://api.github.com/repos/huggingface/datasets/issues/3653/events
https://github.com/huggingface/datasets/issues/3653
1,119,186,952
I_kwDODunzps5CtXAI
3,653
`to_json` in multiprocessing fashion sometimes deadlock
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2022-01-31T09:35:07
2022-01-31T09:35:07
null
MEMBER
null
null
null
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlock, instead of raising exceptions. Temporary solution is to see that it deadlocks, and then reduce the number of processes or batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs.python.org/issue22393#msg315684 where `multiprocessing` fails to raise the OOM exception. One suggested alternative is not use `concurrent.futures` instead. ## Steps to reproduce the bug ## Expected results Script fails when one worker hits OOM, and raise appropriate error. ## Actual results Deadlock ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.1 - Platform: Linux - Python version: 3.8 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3653/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3652/comments
https://api.github.com/repos/huggingface/datasets/issues/3652/events
https://github.com/huggingface/datasets/pull/3652
1,118,808,738
PR_kwDODunzps4xzinr
3,652
sp. Columbia => Colombia
{ "login": "serapio", "id": 3781280, "node_id": "MDQ6VXNlcjM3ODEyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/3781280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/serapio", "html_url": "https://github.com/serapio", "followers_url": "https://api.github.com/users/serapio/followers", "following_url": "https://api.github.com/users/serapio/following{/other_user}", "gists_url": "https://api.github.com/users/serapio/gists{/gist_id}", "starred_url": "https://api.github.com/users/serapio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serapio/subscriptions", "organizations_url": "https://api.github.com/users/serapio/orgs", "repos_url": "https://api.github.com/users/serapio/repos", "events_url": "https://api.github.com/users/serapio/events{/privacy}", "received_events_url": "https://api.github.com/users/serapio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-01-31T00:41:03
2022-02-09T16:55:25
2022-01-31T08:29:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3652", "html_url": "https://github.com/huggingface/datasets/pull/3652", "diff_url": "https://github.com/huggingface/datasets/pull/3652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3652.patch", "merged_at": "2022-01-31T08:29:07" }
"Columbia" is various places in North America. The country is "Colombia".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3652/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3651/comments
https://api.github.com/repos/huggingface/datasets/issues/3651/events
https://github.com/huggingface/datasets/pull/3651
1,118,597,647
PR_kwDODunzps4xy3De
3,651
Update link in wiki_bio dataset
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-01-30T16:28:54
2022-01-31T14:50:48
2022-01-31T08:38:09
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3651", "html_url": "https://github.com/huggingface/datasets/pull/3651", "diff_url": "https://github.com/huggingface/datasets/pull/3651.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3651.patch", "merged_at": "2022-01-31T08:38:09" }
Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket. @lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached somewhere: ```python >>> from datasets import load_dataset load_dataset("wiki_bio>>> load_dataset("wiki_bio") Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): ... File "/home/jxm3/random/datasets/src/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil ``` what do I have to do to invalidate the cache and actually import the dataset? It's clearly set up correctly, since the data is downloaded and processed by the tests. As an aside, this caching-loading-scripts behavior makes for a really bad developer experience. I just wasted an hour trying to figure out where the caching was happening and how to disable it, and I don't know. All I wanted to do was update the link and submit a pull request! I recommend that you all either change this behavior (i.e. updating the link to a dataset should "just work") or document it, since I couldn't find any information about this in the contributing.md or readme or anywhere else! Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3651/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3650/comments
https://api.github.com/repos/huggingface/datasets/issues/3650/events
https://github.com/huggingface/datasets/pull/3650
1,118,537,429
PR_kwDODunzps4xyr2o
3,650
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
4
2022-01-30T13:23:19
2022-02-01T17:49:21
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3650", "html_url": "https://github.com/huggingface/datasets/pull/3650", "diff_url": "https://github.com/huggingface/datasets/pull/3650.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3650.patch", "merged_at": null }
I'm using `to_json(..., num_proc=num_proc, compressiong='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time for a specific batch, and so other process keep accumulating their results in memory. In order to flush memory, I propose we use optional `imap_unordered`. This will prevent one process to block the other ones. The logical thinking is that index are rarily relevant, and in one wants to keep an index, one can still create another column and reconstruct from there.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3650/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3649/comments
https://api.github.com/repos/huggingface/datasets/issues/3649/events
https://github.com/huggingface/datasets/issues/3649
1,117,502,250
I_kwDODunzps5Cm7sq
3,649
Add IGLUE dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608944167, "node_id": "LA_kwDODunzps7XHB4n", "url": "https://api.github.com/repos/huggingface/datasets/labels/multimodal", "name": "multimodal", "color": "19E633", "default": false, "description": "Multimodal datasets" } ]
open
false
null
[]
null
0
2022-01-28T14:59:41
2022-01-28T15:02:35
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-bug/iglue - **Motivation:** This dataset would provide a nice example of combining the text and image features of `datasets` together for multimodal applications. Note: the data / code are not yet visible on the GitHub repo, so I've pinged the authors for more information. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3649/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3648/comments
https://api.github.com/repos/huggingface/datasets/issues/3648/events
https://github.com/huggingface/datasets/pull/3648
1,117,465,505
PR_kwDODunzps4xvXig
3,648
Fix Windows CI: bump python to 3.7
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-28T14:24:54
2022-01-28T14:40:39
2022-01-28T14:40:39
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3648", "html_url": "https://github.com/huggingface/datasets/pull/3648", "diff_url": "https://github.com/huggingface/datasets/pull/3648.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3648.patch", "merged_at": "2022-01-28T14:40:39" }
Python>=3.7 is needed to install `tokenizers` 0.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3648/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3647/comments
https://api.github.com/repos/huggingface/datasets/issues/3647/events
https://github.com/huggingface/datasets/pull/3647
1,117,383,675
PR_kwDODunzps4xvGDQ
3,647
Fix `add_column` on datasets with indices mapping
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2022-01-28T13:06:29
2022-01-28T15:35:58
2022-01-28T15:35:58
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3647", "html_url": "https://github.com/huggingface/datasets/pull/3647", "diff_url": "https://github.com/huggingface/datasets/pull/3647.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3647.patch", "merged_at": "2022-01-28T15:35:57" }
My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`. Fix #3599
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3647/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3646/comments
https://api.github.com/repos/huggingface/datasets/issues/3646/events
https://github.com/huggingface/datasets/pull/3646
1,116,544,627
PR_kwDODunzps4xsX66
3,646
Fix streaming datasets that are not reset correctly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-01-27T17:21:02
2022-01-28T16:34:29
2022-01-28T16:34:28
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3646", "html_url": "https://github.com/huggingface/datasets/pull/3646", "diff_url": "https://github.com/huggingface/datasets/pull/3646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3646.patch", "merged_at": "2022-01-28T16:34:28" }
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed if you try to iterate over such dataset twice, then the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return iterables that are reset properly instead. Close https://github.com/huggingface/datasets/issues/3645 cc @anton-l
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3646/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3645/comments
https://api.github.com/repos/huggingface/datasets/issues/3645/events
https://github.com/huggingface/datasets/issues/3645
1,116,541,298
I_kwDODunzps5CjRFy
3,645
Streaming dataset based on dl_manager.iter_archive/iter_files are not reset correctly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
0
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
MEMBER
null
null
null
Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again: ```python from datasets import load_dataset d = load_dataset("common_voice", "ab", split="test", streaming=True) i = 0 for i, _ in enumerate(d): pass print(i) # 8 # let's do it again i = 0 for i, _ in enumerate(d): pass print(i) # 0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3645/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3644/comments
https://api.github.com/repos/huggingface/datasets/issues/3644/events
https://github.com/huggingface/datasets/issues/3644
1,116,519,670
I_kwDODunzps5CjLz2
3,644
Add a GROUP BY operator
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2022-01-27T16:57:54
2022-02-08T15:06:10
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datasets.Value("string") # } ds = datasets.Dataset() def split(examples): sentences = [text.split(".") for text in examples["text"]] return { "example_id": [ example_id for example_id, sents in zip(examples["example_id"], sentences) for _ in sents ], "sentence": [sent for sents in sentences for sent in sents], "sentence_id": [i for sents in sentences for i in range(len(sents))], } split_ds = ds.map(split, batched=True) def process(examples): outputs = some_neural_network_that_works_on_sentences(examples["sentence"]) return {"outputs": outputs} split_ds = split_ds.map(process, batched=True) ``` I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together. **Describe the solution you'd like** Ideally, it would look something like this: ```python def join(examples): order = np.argsort(examples["sentence_id"]) text = ".".join(examples["text"][i] for i in order) outputs = [examples["outputs"][i] for i in order] return {"text": text, "outputs": outputs} ds = split_ds.group_by("example_id", join) ``` **Describe alternatives you've considered** Right now, we can do this: ```python def merge(example): meeting_id = example["example_id"] parts = split_ds.filter(lambda x: x["example_id"] == meeting_id).sort("segment_no") return {"outputs": list(parts["outputs"])} ds = ds.map(merge) ``` Of course, we could process the dataset like this: ```python def process(example): outputs = some_neural_network_that_works_on_sentences(example["text"].split(".")) return {"outputs": outputs} ds = ds.map(process, batched=True) ``` However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example. I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3644/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3643/comments
https://api.github.com/repos/huggingface/datasets/issues/3643/events
https://github.com/huggingface/datasets/pull/3643
1,116,417,428
PR_kwDODunzps4xr8mX
3,643
Fix sem_eval_2018_task_1 download location
{ "login": "maxpel", "id": 31095360, "node_id": "MDQ6VXNlcjMxMDk1MzYw", "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxpel", "html_url": "https://github.com/maxpel", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "organizations_url": "https://api.github.com/users/maxpel/orgs", "repos_url": "https://api.github.com/users/maxpel/repos", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "received_events_url": "https://api.github.com/users/maxpel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-01-27T15:45:00
2022-02-04T15:15:26
2022-02-04T15:15:26
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3643", "html_url": "https://github.com/huggingface/datasets/pull/3643", "diff_url": "https://github.com/huggingface/datasets/pull/3643.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3643.patch", "merged_at": "2022-02-04T15:15:26" }
As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931_ this is the new pull request to fix the download location.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3643/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3642/comments
https://api.github.com/repos/huggingface/datasets/issues/3642/events
https://github.com/huggingface/datasets/pull/3642
1,116,306,986
PR_kwDODunzps4xrj2S
3,642
Fix dataset slicing with negative bounds when indices mapping is not `None`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-27T14:45:53
2022-01-27T18:16:23
2022-01-27T18:16:22
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3642", "html_url": "https://github.com/huggingface/datasets/pull/3642", "diff_url": "https://github.com/huggingface/datasets/pull/3642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3642.patch", "merged_at": "2022-01-27T18:16:22" }
Fix #3611
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3642/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3641/comments
https://api.github.com/repos/huggingface/datasets/issues/3641/events
https://github.com/huggingface/datasets/pull/3641
1,116,284,268
PR_kwDODunzps4xre7C
3,641
Fix numpy rngs when seed is None
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-27T14:29:09
2022-01-27T18:16:08
2022-01-27T18:16:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3641", "html_url": "https://github.com/huggingface/datasets/pull/3641", "diff_url": "https://github.com/huggingface/datasets/pull/3641.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3641.patch", "merged_at": "2022-01-27T18:16:07" }
Fixes the NumPy RNG when `seed` is `None`. The problem becomes obvious after reading the NumPy notes on RNG (returned by `np.random.get_state()`): > The MT19937 state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position within the main array. `The MT19937 state vector`: the seed which we currently index, but this value stays the same for multiple rounds. `plus a single integer value`: the `pos` value in this PR (is 624 if `seed` is set to a fixed value with `np.random.seed`, so we take the first value in the `seed` array returned by `np.random.get_state()`: https://stackoverflow.com/questions/32172054/how-can-i-retrieve-the-current-seed-of-numpys-random-number-generator) NumPy notes: https://numpy.org/doc/stable/reference/random/bit_generators/mt19937.html Fix #3634
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3641/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3640/comments
https://api.github.com/repos/huggingface/datasets/issues/3640/events
https://github.com/huggingface/datasets/issues/3640
1,116,133,769
I_kwDODunzps5ChtmJ
3,640
Issues with custom dataset in Wav2Vec2
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
NONE
null
null
null
We are training Vav2Vec using the run_speech_recognition_ctc_bnb.py-script. This is working fine with Common Voice, however using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://user-images.githubusercontent.com/9079808/151355893-6d5887cc-ca19-4b12-948a-124eb6dac372.png) We are able to work around the issue, for instance by adding this check in line#222 in transformers/models/wav2vec2/modeling_wav2vec2.py: ```python if input_length - (mask_length - 1) < num_masked_span: num_masked_span = input_length - (mask_length - 1) ``` Interestingly, these are the variable values before the adjustment: ``` input_length=10 mask_length=10 num_masked_span=2 ```` After adjusting num_masked_spin to 1, the training script runs. The issue is also fixed by setting “replace=True” in the same function. Do you have any idea what is causing this, and how to fix this error permanently? If you do not think this is an Datasets issue, feel free to move the issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3640/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3639/comments
https://api.github.com/repos/huggingface/datasets/issues/3639/events
https://github.com/huggingface/datasets/issues/3639
1,116,021,420
I_kwDODunzps5ChSKs
3,639
same value of precision, recall, f1 score at each epoch for classification task.
{ "login": "Dhanachandra", "id": 10828657, "node_id": "MDQ6VXNlcjEwODI4NjU3", "avatar_url": "https://avatars.githubusercontent.com/u/10828657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dhanachandra", "html_url": "https://github.com/Dhanachandra", "followers_url": "https://api.github.com/users/Dhanachandra/followers", "following_url": "https://api.github.com/users/Dhanachandra/following{/other_user}", "gists_url": "https://api.github.com/users/Dhanachandra/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dhanachandra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dhanachandra/subscriptions", "organizations_url": "https://api.github.com/users/Dhanachandra/orgs", "repos_url": "https://api.github.com/users/Dhanachandra/repos", "events_url": "https://api.github.com/users/Dhanachandra/events{/privacy}", "received_events_url": "https://api.github.com/users/Dhanachandra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-01-27T10:14:16
2022-02-09T16:11:49
null
NONE
null
null
null
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7612903225806451} RECALL: {'recall': 0.7612903225806451} F1: {'f1': 0.7612903225806451} {'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0} **4th Epoch:** 1/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.92it/s] 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7698924731182796} RECALL: {'recall': 0.7698924731182796} F1: {'f1': 0.7698924731182796} ## Environment info !git clone https://github.com/huggingface/transformers %cd transformers !pip install . !pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt !pip install datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3639/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3638/comments
https://api.github.com/repos/huggingface/datasets/issues/3638/events
https://github.com/huggingface/datasets/issues/3638
1,115,725,703
I_kwDODunzps5CgJ-H
3,638
AutoTokenizer hash value got change after datasets.map
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
9
2022-01-27T03:19:03
2022-01-28T03:20:38
null
NONE
null
null
null
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` got ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s] f4976bb4694ebc51 3fca35a1fd4a1251 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s] d32837619b7d7d01 5fd925c82edd62b6 ``` 3. run raw_datasets.map(tokenize_function, batched=True) again and see some dataset are not using cache. ## Expected results `AutoTokenizer` work like specific Tokenizer (The hash value don't change after map): ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s] 46d4b31f54153fc7 5b8771afd8d43888 Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow 46d4b31f54153fc7 5b8771afd8d43888 ``` ## Environment info - `datasets` version: 1.18.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3638/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3638/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3637/comments
https://api.github.com/repos/huggingface/datasets/issues/3637/events
https://github.com/huggingface/datasets/issues/3637
1,115,526,438
I_kwDODunzps5CfZUm
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
MEMBER
null
null
null
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master` too. As far as I can tell, the dataset loading script is correct and the problematic features [here](https://huggingface.co/datasets/GEM/RiSAWOZ/blob/main/RiSAWOZ.py#L237) also look fine to me. ## Steps to reproduce the bug ```python from datasets import load_dataset dset = load_dataset("GEM/RiSAWOZ") ``` ## Expected results I can load the dataset without error. ## Actual results <details><summary>Traceback</summary> ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1083 example = self.info.features.encode_example(record) -> 1084 writer.write(example, key) 1085 finally: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size) 445 --> 446 self.write_examples_on_file() 447 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 204 # We only do it if trying_type is False - since this is what the user asks for. --> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) /var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_44306/2896005239.py in <module> ----> 1 dset = load_dataset("GEM/RiSAWOZ") 2 dset ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1692 1693 # Download and prepare data -> 1694 builder_instance.download_and_prepare( 1695 download_config=download_config, 1696 download_mode=download_mode, ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 594 if not downloaded_from_gcs: --> 595 self._download_and_prepare( 596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 597 ) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 682 try: 683 # Prepare split will record examples associated to the split --> 684 self._prepare_split(split_generator, **prepare_split_kwargs) 685 except OSError as e: 686 raise OSError( ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1084 writer.write(example, key) 1085 finally: -> 1086 num_examples, num_bytes = writer.finalize() 1087 1088 split_generator.split_info.num_examples = num_examples ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in finalize(self, close_stream) 525 # Re-intializing to empty list for next batch 526 self.hkey_record = [] --> 527 self.write_examples_on_file() 528 if self.pa_writer is None: 529 if self.schema: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 402 # Since current_examples contains (example, key) tuples 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] 406 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 495 col_try_type = try_features[col] if try_features is not None and col in try_features else None 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() 499 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 203 # Also, when trying type "string", we don't want to convert integers or floats to "string". 204 # We only do it if trying_type is False - since this is what the user asks for. --> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out 207 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1063 # feature must be either [subfeature] or Sequence(subfeature) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): 1067 if feature.length > -1: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1085 elif not isinstance(feature, (Sequence, dict, list, tuple)): 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 1089 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3637/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3636/comments
https://api.github.com/repos/huggingface/datasets/issues/3636/events
https://github.com/huggingface/datasets/pull/3636
1,115,362,702
PR_kwDODunzps4xohMB
3,636
Update index.rst
{ "login": "VioletteLepercq", "id": 95622912, "node_id": "U_kgDOBbMXAA", "avatar_url": "https://avatars.githubusercontent.com/u/95622912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VioletteLepercq", "html_url": "https://github.com/VioletteLepercq", "followers_url": "https://api.github.com/users/VioletteLepercq/followers", "following_url": "https://api.github.com/users/VioletteLepercq/following{/other_user}", "gists_url": "https://api.github.com/users/VioletteLepercq/gists{/gist_id}", "starred_url": "https://api.github.com/users/VioletteLepercq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VioletteLepercq/subscriptions", "organizations_url": "https://api.github.com/users/VioletteLepercq/orgs", "repos_url": "https://api.github.com/users/VioletteLepercq/repos", "events_url": "https://api.github.com/users/VioletteLepercq/events{/privacy}", "received_events_url": "https://api.github.com/users/VioletteLepercq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-26T18:43:09
2022-01-26T18:44:55
2022-01-26T18:44:54
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3636", "html_url": "https://github.com/huggingface/datasets/pull/3636", "diff_url": "https://github.com/huggingface/datasets/pull/3636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3636.patch", "merged_at": "2022-01-26T18:44:54" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3636/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3635/comments
https://api.github.com/repos/huggingface/datasets/issues/3635/events
https://github.com/huggingface/datasets/pull/3635
1,115,333,219
PR_kwDODunzps4xobAe
3,635
Make `ted_talks_iwslt` dataset streamable
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2022-01-26T18:07:56
2022-01-27T13:40:55
null
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3635", "html_url": "https://github.com/huggingface/datasets/pull/3635", "diff_url": "https://github.com/huggingface/datasets/pull/3635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3635.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3635/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3634/comments
https://api.github.com/repos/huggingface/datasets/issues/3634/events
https://github.com/huggingface/datasets/issues/3634
1,115,133,279
I_kwDODunzps5Cd5Vf
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
{ "login": "elisno", "id": 18127060, "node_id": "MDQ6VXNlcjE4MTI3MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/18127060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elisno", "html_url": "https://github.com/elisno", "followers_url": "https://api.github.com/users/elisno/followers", "following_url": "https://api.github.com/users/elisno/following{/other_user}", "gists_url": "https://api.github.com/users/elisno/gists{/gist_id}", "starred_url": "https://api.github.com/users/elisno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisno/subscriptions", "organizations_url": "https://api.github.com/users/elisno/orgs", "repos_url": "https://api.github.com/users/elisno/repos", "events_url": "https://api.github.com/users/elisno/events{/privacy}", "received_events_url": "https://api.github.com/users/elisno/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
2
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
NONE
null
null
null
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work as expected print("Shuffle dataset") for _ in range(3): print(data.shuffle(seed=None)[:]) # This seems to work with pandas print("\nShuffle via pandas") for _ in range(3): df = data.to_pandas().sample(frac=1.0) print(datasets.Dataset.from_pandas(df, preserve_index=False)[:]) ``` ## Expected results I assumed that the default setting would initialize a new/random state of a `np.random.BitGenerator` (see [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=shuffle#datasets.Dataset.shuffle)). Wouldn't that reshuffle the rows each time I call `data.shuffle()`? ## Actual results ```bash Shuffle dataset {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} Shuffle via pandas {'feature': [4, 2, 3, 1, 5], 'label': ['d', 'b', 'c', 'a', 'e']} {'feature': [2, 5, 3, 4, 1], 'label': ['b', 'e', 'c', 'd', 'a']} {'feature': [5, 2, 3, 1, 4], 'label': ['e', 'b', 'c', 'a', 'd']} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3634/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3633/comments
https://api.github.com/repos/huggingface/datasets/issues/3633/events
https://github.com/huggingface/datasets/pull/3633
1,115,040,174
PR_kwDODunzps4xng6E
3,633
Mirror canonical datasets in prod
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-26T13:49:37
2022-01-26T13:56:21
2022-01-26T13:56:21
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3633", "html_url": "https://github.com/huggingface/datasets/pull/3633", "diff_url": "https://github.com/huggingface/datasets/pull/3633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3633.patch", "merged_at": "2022-01-26T13:56:21" }
Push the datasets changes to the Hub in production by setting `HF_USE_PROD=1` I also added a fix that makes the script ignore the json, csv, text, parquet and pandas dataset builders. cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3633/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3633/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3632/comments
https://api.github.com/repos/huggingface/datasets/issues/3632/events
https://github.com/huggingface/datasets/issues/3632
1,115,027,185
I_kwDODunzps5Cdfbx
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
{ "login": "AnzorGozalishvili", "id": 55232459, "node_id": "MDQ6VXNlcjU1MjMyNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/55232459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnzorGozalishvili", "html_url": "https://github.com/AnzorGozalishvili", "followers_url": "https://api.github.com/users/AnzorGozalishvili/followers", "following_url": "https://api.github.com/users/AnzorGozalishvili/following{/other_user}", "gists_url": "https://api.github.com/users/AnzorGozalishvili/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnzorGozalishvili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnzorGozalishvili/subscriptions", "organizations_url": "https://api.github.com/users/AnzorGozalishvili/orgs", "repos_url": "https://api.github.com/users/AnzorGozalishvili/repos", "events_url": "https://api.github.com/users/AnzorGozalishvili/events{/privacy}", "received_events_url": "https://api.github.com/users/AnzorGozalishvili/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
CONTRIBUTOR
null
null
null
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website which was keeping these files are no longer accessible and therefore this dataset became unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible. Also the URLs for dataset file per language isn't accessible: http://data.statmt.org/cc-100/<language code here>.txt.xz (language codes: am, sr, ka, etc.) ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cc100", "ka") ``` It throws 503 error. ## Expected results It should successfully download and load dataset but it throws an exception because the dataset files are no longer accessible. ## Environment info Run from google colab. Just installed the library using pip: ```!pip install -U datasets```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3632/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3631/comments
https://api.github.com/repos/huggingface/datasets/issues/3631/events
https://github.com/huggingface/datasets/issues/3631
1,114,833,662
I_kwDODunzps5CcwL-
3,631
Labels conflict when loading a local CSV file.
{ "login": "pichljan", "id": 8571301, "node_id": "MDQ6VXNlcjg1NzEzMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/8571301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pichljan", "html_url": "https://github.com/pichljan", "followers_url": "https://api.github.com/users/pichljan/followers", "following_url": "https://api.github.com/users/pichljan/following{/other_user}", "gists_url": "https://api.github.com/users/pichljan/gists{/gist_id}", "starred_url": "https://api.github.com/users/pichljan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pichljan/subscriptions", "organizations_url": "https://api.github.com/users/pichljan/orgs", "repos_url": "https://api.github.com/users/pichljan/repos", "events_url": "https://api.github.com/users/pichljan/events{/privacy}", "received_events_url": "https://api.github.com/users/pichljan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
NONE
null
null
null
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_redownload"` did not help. ## Steps to reproduce the bug ```python load_dataset('csv', data_files='data/my_data.csv', features=Features(text=Value(dtype='string'), label=ClassLabel(names_file='data/my_data_labels.txt'))) ``` `my_data.csv` file has the following structure: ``` text,label "example1",0 "example2",1 ... ``` and the `my_data_labels.txt` looks like this: ``` label1 label2 ... ``` ## Expected results Successfully loaded dataset. ## Actual results ```python File "/usr/local/lib/python3.8/site-packages/datasets/load.py", line 1706, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 766, in as_dataset datasets = utils.map_nested( File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 261, in map_nested mapped = [ File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 262, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 797, in _build_single_dataset ds = self._as_dataset( File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 872, in _as_dataset return Dataset(fingerprint=fingerprint, **dataset_kwargs) File "/usr/local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 638, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1242, in from_arrow_schema return Features.from_dict(metadata["info"]["features"]) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1271, in from_dict obj = generate_from_dict(dic) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1083, in generate_from_dict return class_type(**{k: v for k, v in obj.items() if k in field_names}) File "<string>", line 7, in __init__ File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 776, in __post_init__ raise ValueError("Please provide either names or names_file but not both.") ValueError: Please provide either names or names_file but not both. ``` ## Environment info - `datasets` version: 1.18.0 - Python version: 3.8.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3631/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3630/comments
https://api.github.com/repos/huggingface/datasets/issues/3630/events
https://github.com/huggingface/datasets/issues/3630
1,114,578,625
I_kwDODunzps5Cbx7B
3,630
DuplicatedKeysError of NewsQA dataset
{ "login": "StevenTang1998", "id": 37647985, "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StevenTang1998", "html_url": "https://github.com/StevenTang1998", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2022-01-26T03:05:49
2022-02-10T09:59:26
null
NONE
null
null
null
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/default to /root/.cache/huggingface/datasets/newsqa/default-data_dir=news/1.0.0/b0b23e22d94a3d352ad9d75aff2b71375264a122fae301463079ee8595e05ab9... Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1084, in _prepare_split writer.write(example, key) File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 442, in write self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1694, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 595, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 684, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1086, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 524, in finalize self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3630/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3629/comments
https://api.github.com/repos/huggingface/datasets/issues/3629/events
https://github.com/huggingface/datasets/pull/3629
1,113,971,575
PR_kwDODunzps4xkCZA
3,629
Fix Hub repos update when there's a new release
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-25T14:39:45
2022-01-25T14:55:46
2022-01-25T14:55:46
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3629", "html_url": "https://github.com/huggingface/datasets/pull/3629", "diff_url": "https://github.com/huggingface/datasets/pull/3629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3629.patch", "merged_at": "2022-01-25T14:55:46" }
It was not listing the full list of datasets correctly cc @SBrandeis this is why it failed for 1.18.0 We should be good now !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3629/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3629/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3628/comments
https://api.github.com/repos/huggingface/datasets/issues/3628/events
https://github.com/huggingface/datasets/issues/3628
1,113,930,644
I_kwDODunzps5CZTuU
3,628
Dataset Card Creator drops information for "Additional Information" Section
{ "login": "dennlinger", "id": 26013491, "node_id": "MDQ6VXNlcjI2MDEzNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/26013491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dennlinger", "html_url": "https://github.com/dennlinger", "followers_url": "https://api.github.com/users/dennlinger/followers", "following_url": "https://api.github.com/users/dennlinger/following{/other_user}", "gists_url": "https://api.github.com/users/dennlinger/gists{/gist_id}", "starred_url": "https://api.github.com/users/dennlinger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennlinger/subscriptions", "organizations_url": "https://api.github.com/users/dennlinger/orgs", "repos_url": "https://api.github.com/users/dennlinger/repos", "events_url": "https://api.github.com/users/dennlinger/events{/privacy}", "received_events_url": "https://api.github.com/users/dennlinger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2022-01-25T14:06:17
2022-01-25T14:09:01
null
NONE
null
null
null
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Additional Information". I was able to reproduce the issue in both Firefox and Chrome, so I suspect a problem with the React logic that doesn't expect users to switch back in the final section. Edit: I'm also not sure whether this is the right place to open the bug report on, since it's not clear to me which particular project it belongs to, or where I could find associated source code. ## Steps to reproduce the bug 1. Navigate to the Section "Additional Information" in the [dataset card creator](https://huggingface.co/datasets/card-creator/) 2. Enter text in an arbitrary field, e.g., "Dataset Curators". 3. Switch back to a previous section, like "Dataset Creation". 4. When switching back again to "Additional Information", the text has been deleted. Notably, this behavior can be reproduced again and again, it's not just problematic for the first "switch-back" from Additional Information. ## Expected results For step 4, the previously entered information should still be present in the boxes, similar to the behavior to all other sections (switching back there works as expected) ## Actual results The text boxes are empty again, and previously entered text got deleted. ## Environment info - `datasets` version: N/A - Platform: Firefox 96.0 / Chrome 97.0 - Python version: N/A - PyArrow version: N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3628/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3627/comments
https://api.github.com/repos/huggingface/datasets/issues/3627/events
https://github.com/huggingface/datasets/pull/3627
1,113,556,837
PR_kwDODunzps4xitGe
3,627
Fix host URL in The Pile datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2022-01-25T08:11:28
2022-02-12T12:59:17
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3627", "html_url": "https://github.com/huggingface/datasets/pull/3627", "diff_url": "https://github.com/huggingface/datasets/pull/3627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3627.patch", "merged_at": null }
This PR fixes the host URL in The Pile datasets, once they have mirrored their data in another server. Fix #3626.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3627/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3626/comments
https://api.github.com/repos/huggingface/datasets/issues/3626/events
https://github.com/huggingface/datasets/issues/3626
1,113,534,436
I_kwDODunzps5CXy_k
3,626
The Pile cannot connect to host
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-01-25T07:43:33
2022-01-25T07:43:34
null
MEMBER
null
null
null
## Describe the bug The Pile had issues with their previous host server and have mirrored its content to another server. The new URL server should be updated.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3626/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3625/comments
https://api.github.com/repos/huggingface/datasets/issues/3625/events
https://github.com/huggingface/datasets/issues/3625
1,113,017,522
I_kwDODunzps5CV0yy
3,625
Add a metadata field for when source data was produced
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2022-01-24T18:52:39
2022-01-27T18:13:06
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly. **Describe the solution you'd like** There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`. **Describe alternatives you've considered** This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets. **Additional context** I believe this feature is relevant for a number of reasons: - Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant. - More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding the time when the underlying text (or other data types) is arguably more important. - time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here. **open questions** - I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss. - what level of granularity would make sense for this? e.g. assigning a decade, century or year? - how to encode this information? What formatting makes sense - what specific time to encode; a data range? (mean, modal, min, max value?) This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3625/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3623/comments
https://api.github.com/repos/huggingface/datasets/issues/3623/events
https://github.com/huggingface/datasets/pull/3623
1,112,835,239
PR_kwDODunzps4xgWig
3,623
Extend support for streaming datasets that use os.path.relpath
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-24T16:00:52
2022-02-04T14:03:55
2022-02-04T14:03:54
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3623", "html_url": "https://github.com/huggingface/datasets/pull/3623", "diff_url": "https://github.com/huggingface/datasets/pull/3623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3623.patch", "merged_at": "2022-02-04T14:03:54" }
This PR extends the support in streaming mode for datasets that use `os.path.relpath`, by patching that function. This feature will also be useful to yield the relative path of audio or image files, within an archive or parent dir. Close #3622.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3623/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3622/comments
https://api.github.com/repos/huggingface/datasets/issues/3622/events
https://github.com/huggingface/datasets/issues/3622
1,112,831,661
I_kwDODunzps5CVHat
3,622
Extend support for streaming datasets that use os.path.relpath
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
MEMBER
null
null
null
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3622/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3621/comments
https://api.github.com/repos/huggingface/datasets/issues/3621/events
https://github.com/huggingface/datasets/issues/3621
1,112,720,434
I_kwDODunzps5CUsQy
3,621
Consider adding `ipywidgets` as a dependency.
{ "login": "koaning", "id": 1019791, "node_id": "MDQ6VXNlcjEwMTk3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1019791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koaning", "html_url": "https://github.com/koaning", "followers_url": "https://api.github.com/users/koaning/followers", "following_url": "https://api.github.com/users/koaning/following{/other_user}", "gists_url": "https://api.github.com/users/koaning/gists{/gist_id}", "starred_url": "https://api.github.com/users/koaning/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koaning/subscriptions", "organizations_url": "https://api.github.com/users/koaning/orgs", "repos_url": "https://api.github.com/users/koaning/repos", "events_url": "https://api.github.com/users/koaning/events{/privacy}", "received_events_url": "https://api.github.com/users/koaning/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2022-01-24T14:27:11
2022-01-24T15:14:15
null
NONE
null
null
null
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to run shut down the jupyterlab server in order to install the required dependency. Might it be an option to just include it as a dependency here?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3621/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3620/comments
https://api.github.com/repos/huggingface/datasets/issues/3620/events
https://github.com/huggingface/datasets/pull/3620
1,112,677,252
PR_kwDODunzps4xf1J3
3,620
Add Fon language tag
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-24T13:52:26
2022-02-04T14:04:36
2022-02-04T14:04:35
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3620", "html_url": "https://github.com/huggingface/datasets/pull/3620", "diff_url": "https://github.com/huggingface/datasets/pull/3620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3620.patch", "merged_at": "2022-02-04T14:04:35" }
Add Fon language tag to resources.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3620/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3619/comments
https://api.github.com/repos/huggingface/datasets/issues/3619/events
https://github.com/huggingface/datasets/pull/3619
1,112,611,415
PR_kwDODunzps4xfnCQ
3,619
fix meta in mls
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2022-01-24T12:54:38
2022-01-24T20:53:22
2022-01-24T20:53:22
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3619", "html_url": "https://github.com/huggingface/datasets/pull/3619", "diff_url": "https://github.com/huggingface/datasets/pull/3619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3619.patch", "merged_at": "2022-01-24T20:53:21" }
`monolingual` value of `m ultilinguality` param in yaml meta was changed to `multilingual` :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3619/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3618/comments
https://api.github.com/repos/huggingface/datasets/issues/3618/events
https://github.com/huggingface/datasets/issues/3618
1,112,123,365
I_kwDODunzps5CSafl
3,618
TIMIT Dataset not working with GPU
{ "login": "TheSeamau5", "id": 3227869, "node_id": "MDQ6VXNlcjMyMjc4Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/3227869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheSeamau5", "html_url": "https://github.com/TheSeamau5", "followers_url": "https://api.github.com/users/TheSeamau5/followers", "following_url": "https://api.github.com/users/TheSeamau5/following{/other_user}", "gists_url": "https://api.github.com/users/TheSeamau5/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheSeamau5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheSeamau5/subscriptions", "organizations_url": "https://api.github.com/users/TheSeamau5/orgs", "repos_url": "https://api.github.com/users/TheSeamau5/repos", "events_url": "https://api.github.com/users/TheSeamau5/events{/privacy}", "received_events_url": "https://api.github.com/users/TheSeamau5/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2022-01-24T03:26:03
2022-01-27T13:17:51
null
NONE
null
null
null
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU). I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance. This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error. ## Steps to reproduce the bug ```python from datasets import load_dataset timit_train = load_dataset('timit_asr', split='train') print(timit_train['audio']) ``` ## Expected results Expected to see inside the 'audio' column, which contains an 'array' nested field with the array data I actually need. ## Actual results Traceback ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-ceeac555e921> in <module> ----> 1 timit_train['audio'] /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1918 return self._getitem( -> 1919 key, 1920 ) 1921 /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1903 formatted_output = format_table( -> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1905 ) 1906 return formatted_output /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 529 python_formatter = PythonFormatter(features=None) 530 if format_columns is None: --> 531 return formatter(pa_table, query_type=query_type) 532 elif query_type == "column": 533 if key in format_columns: /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 280 return self.format_row(pa_table) 281 elif query_type == "column": --> 282 return self.format_column(pa_table) 283 elif query_type == "batch": 284 return self.format_batch(pa_table) /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table) 315 column = self.python_arrow_extractor().extract_column(pa_table) 316 if self.decoded: --> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 318 return column 319 /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name) 221 222 def decode_column(self, column: list, column_name: str) -> list: --> 223 return self.features.decode_column(column, column_name) if self.features else column 224 225 def decode_batch(self, batch: dict) -> dict: /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name) 1337 return ( 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] -> 1339 if self._column_requires_decoding[column_name] 1340 else column 1341 ) /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0) 1336 """ 1337 return ( -> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] 1339 if self._column_requires_decoding[column_name] 1340 else column /opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 85 dict 86 """ ---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None) 88 if path is None and file is None: 89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") TypeError: string indices must be integers ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3618/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3617/comments
https://api.github.com/repos/huggingface/datasets/issues/3617/events
https://github.com/huggingface/datasets/pull/3617
1,111,938,691
PR_kwDODunzps4xdb8K
3,617
PR for the CFPB Consumer Complaints dataset
{ "login": "kayvane1", "id": 42403093, "node_id": "MDQ6VXNlcjQyNDAzMDkz", "avatar_url": "https://avatars.githubusercontent.com/u/42403093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kayvane1", "html_url": "https://github.com/kayvane1", "followers_url": "https://api.github.com/users/kayvane1/followers", "following_url": "https://api.github.com/users/kayvane1/following{/other_user}", "gists_url": "https://api.github.com/users/kayvane1/gists{/gist_id}", "starred_url": "https://api.github.com/users/kayvane1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kayvane1/subscriptions", "organizations_url": "https://api.github.com/users/kayvane1/orgs", "repos_url": "https://api.github.com/users/kayvane1/repos", "events_url": "https://api.github.com/users/kayvane1/events{/privacy}", "received_events_url": "https://api.github.com/users/kayvane1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2022-01-23T17:47:12
2022-02-07T21:08:31
2022-02-07T21:08:31
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3617", "html_url": "https://github.com/huggingface/datasets/pull/3617", "diff_url": "https://github.com/huggingface/datasets/pull/3617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3617.patch", "merged_at": "2022-02-07T21:08:31" }
Think I followed all the steps but please let me know if anything needs changing or any improvements I can make to the code quality
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3617/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3617/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3616/comments
https://api.github.com/repos/huggingface/datasets/issues/3616/events
https://github.com/huggingface/datasets/pull/3616
1,111,587,861
PR_kwDODunzps4xcZMD
3,616
Make streamable the BnL Historical Newspapers dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-22T14:52:36
2022-02-04T14:05:23
2022-02-04T14:05:21
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3616", "html_url": "https://github.com/huggingface/datasets/pull/3616", "diff_url": "https://github.com/huggingface/datasets/pull/3616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3616.patch", "merged_at": "2022-02-04T14:05:21" }
I've refactored the code in order to make the dataset streamable and to avoid it takes too long: - I've used `iter_files` Close #3615
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3616/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3616/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3615/comments
https://api.github.com/repos/huggingface/datasets/issues/3615/events
https://github.com/huggingface/datasets/issues/3615
1,111,576,876
I_kwDODunzps5CQVEs
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
MEMBER
null
null
null
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3615/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3614/comments
https://api.github.com/repos/huggingface/datasets/issues/3614/events
https://github.com/huggingface/datasets/pull/3614
1,110,736,657
PR_kwDODunzps4xZdCe
3,614
Minor fixes
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2022-01-21T17:48:44
2022-01-24T12:45:49
2022-01-24T12:45:49
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3614", "html_url": "https://github.com/huggingface/datasets/pull/3614", "diff_url": "https://github.com/huggingface/datasets/pull/3614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3614.patch", "merged_at": "2022-01-24T12:45:49" }
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3614/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3613/comments
https://api.github.com/repos/huggingface/datasets/issues/3613/events
https://github.com/huggingface/datasets/issues/3613
1,110,684,015
I_kwDODunzps5CM7Fv
3,613
Files not updating in dataset viewer
{ "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
2
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
MEMBER
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and it is not updating to reflect new files that are added to the dataset. I get this error: ![image](https://user-images.githubusercontent.com/1778297/150566660-30dc0dcd-18fd-4471-b70c-7c4bdc6a23c6.png) Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3613/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3612/comments
https://api.github.com/repos/huggingface/datasets/issues/3612/events
https://github.com/huggingface/datasets/pull/3612
1,110,506,466
PR_kwDODunzps4xYsvS
3,612
wikifix
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2022-01-21T14:05:11
2022-02-03T17:58:16
2022-02-03T17:58:16
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3612", "html_url": "https://github.com/huggingface/datasets/pull/3612", "diff_url": "https://github.com/huggingface/datasets/pull/3612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3612.patch", "merged_at": null }
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with language ff and ii)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3612/timeline
null
https://api.github.com/repos/huggingface/datasets/issues/3611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3611/comments
https://api.github.com/repos/huggingface/datasets/issues/3611/events
https://github.com/huggingface/datasets/issues/3611
1,110,399,096
I_kwDODunzps5CL1h4
3,611
Indexing bug after dataset.select()
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
1
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } task_name = "sst2" raw_datasets = datasets.load_dataset("glue", task_name) train_dataset = raw_datasets["train"] print("before select: ",train_dataset[-2:]) # before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]} train_dataset = train_dataset.select(range(100)) print("after select: ",train_dataset[-2:]) # after select: {'sentence': [], 'label': [], 'idx': []} ``` link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing ## Expected results A clear and concise description of the expected results. showing 98, 99 index data ## Actual results Specify the actual results or traceback. empty ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3611/timeline
null