Column schema of this split (string lengths and numeric values are min/max; class counts are distinct values):

| column | dtype | stats |
| --- | --- | --- |
| url | string | length 61 to 61 |
| repository_url | string | 1 class |
| labels_url | string | length 75 to 75 |
| comments_url | string | length 70 to 70 |
| events_url | string | length 68 to 68 |
| html_url | string | length 49 to 51 |
| id | int64 | 1.77B to 1.82B |
| node_id | string | length 18 to 19 |
| number | int64 | 5.98k to 6.08k |
| title | string | length 5 to 280 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | int64 | 0 to 13 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | length 10 to 7.17k |
| reactions | dict | |
| timeline_url | string | length 70 to 70 |
| performed_via_github_app | null | |
| state_reason | string | 1 class |

The records below list their fields in this column order.
https://api.github.com/repos/huggingface/datasets/issues/6080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6080/comments
https://api.github.com/repos/huggingface/datasets/issues/6080/events
https://github.com/huggingface/datasets/pull/6080
1,822,667,554
PR_kwDODunzps5WdL4K
6,080
Remove README link to deprecated Colab notebook
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-26T15:27:49
2023-07-26T16:24:43
2023-07-26T16:14:34
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6080", "html_url": "https://github.com/huggingface/datasets/pull/6080", "diff_url": "https://github.com/huggingface/datasets/pull/6080.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6080.patch", "merged_at": "2023-07-26T16:14:34" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6080/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6079/comments
https://api.github.com/repos/huggingface/datasets/issues/6079/events
https://github.com/huggingface/datasets/issues/6079
1,822,597,471
I_kwDODunzps5soqFf
6,079
Iterating over DataLoader based on HF datasets is stuck forever
{ "login": "arindamsarkar93", "id": 5454868, "node_id": "MDQ6VXNlcjU0NTQ4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arindamsarkar93", "html_url": "https://github.com/arindamsarkar93", "followers_url": "https://api.github.com/users/arindamsarkar93/followers", "following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}", "gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}", "starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions", "organizations_url": "https://api.github.com/users/arindamsarkar93/orgs", "repos_url": "https://api.github.com/users/arindamsarkar93/repos", "events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}", "received_events_url": "https://api.github.com/users/arindamsarkar93/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
11
2023-07-26T14:52:37
2023-07-26T19:14:16
null
NONE
null
null
null
### Describe the bug

I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10 based Conda environment. I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6 based Conda environment. What should be my next steps here?

### Steps to reproduce the bug

```
train_dataset = load_dataset(
    "parquet",
    data_files={'train': tr_data_path + '*.parquet'},
    split='train',
    collate_fn=streaming_data_collate_fn,
    streaming=True
).with_format('torch')

train_dataloader = DataLoader(train_dataset, batch_size=2, num_workers=0)

t = time.time()
iter_ = 0
for batch in train_dataloader:
    iter_ += 1
    if iter_ == 1000:
        break
print(time.time() - t)
```

### Expected behavior

The snippet should work normally and load the next batch of data.

### Environment info

datasets: '2.14.0'
pyarrow: '12.0.0'
torch: '2.0.0'
Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]

`!uname -r`
5.10.178-162.673.amzn2.x86_64
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6079/timeline
null
null
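A note on the record above (issue 6079): when a streaming pipeline hangs, a useful first step is to take torch's `DataLoader` out of the equation and iterate the `IterableDataset` directly; if that also stalls, the problem is in datasets' streaming/decoding path rather than in the DataLoader. Incidentally, `collate_fn` is not a documented `load_dataset()` argument (collation belongs to the `DataLoader`), so this diagnostic sketch drops it. `tr_data_path` is a placeholder taken from the report:

```python
import time

from datasets import load_dataset

tr_data_path = "./data/"  # placeholder, as in the report above

# Iterate the streaming dataset directly, without a torch DataLoader,
# to check whether the hang is inside datasets' streaming path itself.
ds = load_dataset(
    "parquet",
    data_files={"train": tr_data_path + "*.parquet"},
    split="train",
    streaming=True,
).with_format("torch")

t = time.time()
for i, example in enumerate(ds):
    if i == 100:
        break
print(f"read 100 examples in {time.time() - t:.1f}s")
```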
https://api.github.com/repos/huggingface/datasets/issues/6078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
https://api.github.com/repos/huggingface/datasets/issues/6078/events
https://github.com/huggingface/datasets/issues/6078
1,822,501,472
I_kwDODunzps5soSpg
6,078
resume_download with streaming=True
{ "login": "NicolasMICAUX", "id": 72763959, "node_id": "MDQ6VXNlcjcyNzYzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NicolasMICAUX", "html_url": "https://github.com/NicolasMICAUX", "followers_url": "https://api.github.com/users/NicolasMICAUX/followers", "following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}", "gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}", "starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions", "organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs", "repos_url": "https://api.github.com/users/NicolasMICAUX/repos", "events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}", "received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-26T14:08:22
2023-07-26T15:35:25
null
NONE
null
null
null
### Describe the bug

I used:

```
dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train"
)
```

Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` does not seem to work with streaming=True.

### Steps to reproduce the bug

```
from datasets import load_dataset, DownloadConfig

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,  # optional
    split="train",
    download_config=DownloadConfig(resume_download=True)
)
# interrupt the run and try to relaunch it => this restarts from scratch
```

### Expected behavior

I would expect a parameter to start streaming from a given index in the dataset.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
null
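Following up on issue 6078 above: `load_dataset` has no resume-from-offset parameter, but `IterableDataset.skip()` can fast-forward a streaming dataset past already-seen examples. Note that the skipped records are still fetched and discarded on the fly, so this saves recomputation but not bandwidth. A minimal sketch, assuming the checkpointed step count is restored into `resume_step`:

```python
from datasets import load_dataset

resume_step = 1_000_000  # assumed: restored from your own training checkpoint

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)

# skip() returns a new IterableDataset that discards the first n examples;
# the stream still reads (and throws away) those records as it goes.
dataset = dataset.skip(resume_step)

for example in dataset:
    ...  # continue training from example 1_000_000 onward
    break
```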
https://api.github.com/repos/huggingface/datasets/issues/6077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6077/comments
https://api.github.com/repos/huggingface/datasets/issues/6077/events
https://github.com/huggingface/datasets/issues/6077
1,822,486,810
I_kwDODunzps5soPEa
6,077
Mapping gets stuck at 99%
{ "login": "Laurent2916", "id": 21087104, "node_id": "MDQ6VXNlcjIxMDg3MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laurent2916", "html_url": "https://github.com/Laurent2916", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "repos_url": "https://api.github.com/users/Laurent2916/repos", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-26T14:00:40
2023-07-26T18:29:10
null
CONTRIBUTOR
null
null
null
### Describe the bug

Hi! I'm currently working with a large (~150GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it.

I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation metric for each feature of the entire dataset. I cannot load the entire dataset to RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them on the dataset.

The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why.

Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me.

### Steps to reproduce the bug

I'm able to reproduce the problem using the following scripts:

```python
# random_data.py
import datasets
import torch

_VERSION = "1.0.0"


class RandomDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            version=_VERSION,
            supervised_keys=None,
            features=datasets.Features(
                {
                    "positions": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "normals": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "features": datasets.Array2D(
                        shape=(30000, 6),
                        dtype="float32",
                    ),
                    "scalars": datasets.Sequence(
                        feature=datasets.Value("float32"),
                        length=20,
                    ),
                },
            ),
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,  # type: ignore
                gen_kwargs={"nb_samples": 1000},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,  # type: ignore
                gen_kwargs={"nb_samples": 100},
            ),
        ]

    def _generate_examples(self, nb_samples: int):
        for idx in range(nb_samples):
            yield idx, {
                "positions": torch.rand(30000, 3),
                "normals": torch.rand(30000, 3),
                "features": torch.rand(30000, 6),
                "scalars": torch.rand(20),
            }
```

```python
# main.py
import datasets
import torch


def compute_mean_std(
    dataset: datasets.Dataset,
) -> dict[str, torch.Tensor]:
    """Compute the mean and standard deviation of each feature of the dataset.

    Args:
        dataset (`Dataset`): A huggingface dataset.

    Returns:
        dict: A dictionary containing the mean and standard deviation of each feature.
    """
    result = {}
    for key in dataset:
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # reshape data, from (a, ..., b, c) -> (*, c)
        data = data.reshape(-1, data.shape[-1])
        # compute mean and std
        mean = data.mean(dim=0)  # (c)
        std = data.std(dim=0)  # (c)
        # store in result
        result[key] = torch.stack((mean, std))
    return result


def apply_mean_std(
    dataset: datasets.Dataset,
    mean_std: datasets.Dataset,
) -> dict[str, torch.Tensor]:
    """Normalize the dataset using the mean and standard deviation of each feature.

    Args:
        dataset (`Dataset`): A huggingface dataset.
        mean_std (`Dataset`): A huggingface dataset containing the mean and standard deviation of each feature.

    Returns:
        dict: A dictionary containing the normalized dataset.
    """
    result = {}
    for key in mean_std.column_names:
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # extract mean and std from dict
        mean = mean_std[key][0]  # type: ignore
        std = mean_std[key][1]  # type: ignore
        # normalize data
        normalized_data = (data - mean) / std
        result[key] = normalized_data
    return result


# hack to force the map function to use the entire dataset
MAX_MAP_BATCH_SIZE = 1_000_000_000

# get dataset
ds = datasets.load_dataset(
    path="random_data.py",
    split="train",
).with_format("torch")

# compute mean/std of each feature
mean_std = ds.map(
    desc="Computing mean/std",  # type: ignore
    remove_columns=ds.column_names,  # type: ignore
    function=compute_mean_std,
    batch_size=MAX_MAP_BATCH_SIZE,
    batched=True,
)

# normalize each feature of the dataset
ds_normalized = ds.map(
    desc="Applying mean/std",  # type: ignore
    function=apply_mean_std,
    batched=False,
    fn_kwargs={
        "mean_std": mean_std,
    },
)
```

### Expected behavior

Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really, really slow; for example, reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, I'm sure I must be missing something, but I would still expect this to be faster.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
null
null
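One thing worth ruling out for the slowdown in issue 6077 above: `mean_std` is itself an Arrow-backed `Dataset`, so indexing `mean_std[key]` inside the per-example mapping function may trigger disk reads on every example. Materializing the statistics as plain in-memory tensors once, before the second `map`, removes that per-example I/O. A hedged sketch (an experiment to try, not a confirmed root cause) that builds directly on the `ds` and `mean_std` objects from the report's main.py and mirrors its `mean_std[key][0]` / `mean_std[key][1]` indexing:

```python
import torch

# Extract the statistics once, outside the mapping function.
stats = {
    key: (torch.as_tensor(mean_std[key][0]), torch.as_tensor(mean_std[key][1]))
    for key in mean_std.column_names
}


def apply_mean_std_fast(example):
    # Only plain tensor arithmetic here; no dataset indexing per example.
    return {
        key: (example[key] - mean) / std
        for key, (mean, std) in stats.items()
    }


ds_normalized = ds.map(function=apply_mean_std_fast, desc="Applying mean/std")
```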
https://api.github.com/repos/huggingface/datasets/issues/6076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6076/comments
https://api.github.com/repos/huggingface/datasets/issues/6076/events
https://github.com/huggingface/datasets/pull/6076
1,822,345,597
PR_kwDODunzps5WcGVR
6,076
No gzip encoding from github
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-07-26T12:46:07
2023-07-26T14:01:21
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6076", "html_url": "https://github.com/huggingface/datasets/pull/6076", "diff_url": "https://github.com/huggingface/datasets/pull/6076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6076.patch", "merged_at": null }
Don't accept gzip encoding from GitHub, otherwise some files are not streamable and seekable. Fixes https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84 and makes sure https://github.com/huggingface/datasets/issues/2918 works as well.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6076/timeline
null
null
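Background for PR 6076 above: with `Content-Encoding: gzip`, byte offsets in the decompressed stream no longer correspond to byte offsets on the wire, so a client cannot seek without re-reading from the start. A small sketch of the underlying HTTP behavior using plain `requests` (independent of the library's internals; the URL is just a public example file):

```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/main/README.md"

# Ask the server for the identity (uncompressed) encoding so that byte
# ranges map directly to file offsets and seeking stays cheap.
resp = requests.get(url, headers={"Accept-Encoding": "identity"}, stream=True)
print(resp.headers.get("Content-Encoding"))  # expect no 'gzip' here

# With identity encoding, a Range request can fetch an arbitrary slice.
part = requests.get(
    url, headers={"Accept-Encoding": "identity", "Range": "bytes=0-99"}
)
print(part.status_code, len(part.content))  # 206 and 100 if ranges are honored
```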
https://api.github.com/repos/huggingface/datasets/issues/6075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6075/comments
https://api.github.com/repos/huggingface/datasets/issues/6075/events
https://github.com/huggingface/datasets/issues/6075
1,822,341,398
I_kwDODunzps5snrkW
6,075
Error loading music files using `load_dataset`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-26T12:44:05
2023-07-26T13:08:08
2023-07-26T13:08:08
NONE
null
null
null
### Describe the bug

I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test

I got the following error -

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
    return self._getitem(key)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
    formatted_output = format_table(
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
    return self.format_column(pa_table)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
    column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
    return self.features.decode_column(column, column_name) if self.features else column
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
    array, sampling_rate = sf.read(f)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
    with SoundFile(file, 'r', samplerate, channels,
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
    _error_check(_snd.sf_error(file_ptr),
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
    raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```

### Steps to reproduce the bug

Code to reproduce the error -

```python
from datasets import load_dataset

ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```

### Expected behavior

I should be able to read the music file without any error.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
null
completed
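A quick check for the "Format not recognised" error in issue 6075 above: ask the installed `soundfile` build which codecs its libsndfile can decode (MP3 support, for instance, only arrived in libsndfile 1.1). A hedged diagnostic sketch:

```python
import soundfile as sf

# Version of libsndfile bundled with (or found by) the soundfile package.
print(sf.__libsndfile_version__)

# Decodable formats for this build; if the file's codec is missing here,
# libsndfile raises "Format not recognised" when opening it.
print(sf.available_formats())
```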
https://api.github.com/repos/huggingface/datasets/issues/6074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6074/comments
https://api.github.com/repos/huggingface/datasets/issues/6074/events
https://github.com/huggingface/datasets/pull/6074
1,822,299,128
PR_kwDODunzps5Wb8O_
6,074
Misc doc improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-07-26T12:20:54
2023-07-26T14:42:56
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074", "html_url": "https://github.com/huggingface/datasets/pull/6074", "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "merged_at": null }
Removes the warning that defining multiple configurations requires writing a dataset loading script, as the README YAML can be used instead (for simple cases). Also deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch` 1.12.1 is over a year old (and `torch` 2.0 has been out for a while).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
null
null
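The README-YAML configuration mechanism referred to in PR 6074 above can be exercised without any loading script; a minimal sketch (the repo id is a placeholder for a Hub dataset whose README front matter declares `configs`):

```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "username/my-dataset"  # placeholder Hub repo with README-defined configs

# Configurations declared in the README YAML are discoverable just like
# script-defined ones.
names = get_dataset_config_names(repo_id)
print(names)

ds = load_dataset(repo_id, names[0], split="train")
```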
https://api.github.com/repos/huggingface/datasets/issues/6073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6073/comments
https://api.github.com/repos/huggingface/datasets/issues/6073/events
https://github.com/huggingface/datasets/issues/6073
1,822,167,804
I_kwDODunzps5snBL8
6,073
version 2.3.2: load_dataset() data_files can't include .xxxx in path
{ "login": "BUAAChuanWang", "id": 45893496, "node_id": "MDQ6VXNlcjQ1ODkzNDk2", "avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BUAAChuanWang", "html_url": "https://github.com/BUAAChuanWang", "followers_url": "https://api.github.com/users/BUAAChuanWang/followers", "following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}", "gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions", "organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs", "repos_url": "https://api.github.com/users/BUAAChuanWang/repos", "events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}", "received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-26T11:09:31
2023-07-26T12:34:45
null
NONE
null
null
null
### Describe the bug

First, I cd into the workdir. Then I just use:

```python
load_dataset("json", data_file={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})
```

which doesn't work and raises:

```
FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/
```

When I debug, it works fine in version 2.1.2, so there may be a bug in the path join. Here is the whole bug report (install paths shortened to /x/):

```
/x/datasets/load.py:1656 in load_dataset
❱ 1656    builder_instance = load_dataset_builder(
              path=path,
              name=name,
              data_dir=data_dir,

/x/datasets/load.py:1439 in load_dataset_builder
❱ 1439    dataset_module = dataset_module_factory(
              path,
              revision=revision,
              download_config=download_config,

/x/datasets/load.py:1097 in dataset_module_factory
          # Try packaged
          if path in _PACKAGED_DATASETS_MODULES:
❱ 1097        return PackagedDatasetModuleFactory(
                  path,
                  data_dir=data_dir,
                  data_files=data_files,

/x/datasets/load.py:743 in get_module
❱ 743     data_files = DataFilesDict.from_local_or_remote(
              patterns,
              use_auth_token=self.download_config.use_auth_token,
              base_path=str(Path(self.data_dir).resolve()) if self.data...

/x/datasets/data_files.py:590 in from_local_or_remote
❱ 590     DataFilesList.from_local_or_remote(
              patterns_for_key,
              base_path=base_path,
              allowed_extensions=allowed_extensions,

/x/datasets/data_files.py:558 in from_local_or_remote
❱ 558     data_files = resolve_patterns_locally_or_by_urls(base_path, pa...

/x/datasets/data_files.py:195 in resolve_patterns_locally_or_by_urls
❱ 195     for path in _resolve_single_pattern_locally(base_path, pat...

/x/datasets/data_files.py:145 in _resolve_single_pattern_locally
          error_msg = f"Unable to find '{pattern}' at {Path(base_path).r...
          if allowed_extensions is not None:
              error_msg += f" with any supported extension {list(allowed...
❱ 145     raise FileNotFoundError(error_msg)
```

### Steps to reproduce the bug

1. Version = 2.3.2
2. In a shell, cd into the workdir (cd /a/b/c/.d/)
3. `load_dataset("json", data_file={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`

### Expected behavior

Fix it please~

### Environment info

2.3.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
null
null
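A plausible trigger for issue 6073 above is the hidden directory component (`.d`) in the path: the local pattern resolver in `datasets` skips dot-prefixed files and directories by default. A hedged sketch to test whether hidden path components alone reproduce the error:

```python
import json
import os

from datasets import load_dataset

# Create the same layout twice, once under a hidden directory.
for base in ("visible_dir", ".hidden_dir"):
    os.makedirs(os.path.join(base, "train"), exist_ok=True)
    with open(os.path.join(base, "train", "train.json"), "w") as f:
        f.write(json.dumps({"text": "hello"}) + "\n")

# Expected to work.
load_dataset("json", data_files={"train": "visible_dir/train/train.json"})

# If hidden components are the trigger, this raises FileNotFoundError.
load_dataset("json", data_files={"train": ".hidden_dir/train/train.json"})
```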
https://api.github.com/repos/huggingface/datasets/issues/6072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6072/comments
https://api.github.com/repos/huggingface/datasets/issues/6072/events
https://github.com/huggingface/datasets/pull/6072
1,822,123,560
PR_kwDODunzps5WbWFN
6,072
Fix fsspec storage_options from load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-07-26T10:44:23
2023-07-26T19:26:48
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6072", "html_url": "https://github.com/huggingface/datasets/pull/6072", "diff_url": "https://github.com/huggingface/datasets/pull/6072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6072.patch", "merged_at": null }
close https://github.com/huggingface/datasets/issues/6071
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6072/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6071/comments
https://api.github.com/repos/huggingface/datasets/issues/6071/events
https://github.com/huggingface/datasets/issues/6071
1,821,990,749
I_kwDODunzps5smV9d
6,071
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-07-26T09:37:20
2023-07-26T11:04:35
null
NONE
null
null
null
### Describe the bug

Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.

I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`

### Steps to reproduce the bug

```python
import fsspec
import pandas as pd
import datasets

# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)

_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)

dataset = datasets.load_dataset(
    "parquet", data_files=data_files, storage_options=fs.storage_options
)
```

Looking at the `storage_options` resolved here: https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331 they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339 the call will fail if the user-provided `storage_options` were needed.

---

A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:

```python
dataset = datasets.load_dataset(
    "parquet",
    data_files=data_files,
    storage_options=fs.storage_options,
    download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```

### Expected behavior

`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.

### Environment info

datasets==2.14.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6070/comments
https://api.github.com/repos/huggingface/datasets/issues/6070/events
https://github.com/huggingface/datasets/pull/6070
1,820,836,330
PR_kwDODunzps5WXDLc
6,070
Fix Quickstart notebook link
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-25T17:48:37
2023-07-25T18:19:01
2023-07-25T18:10:16
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070", "html_url": "https://github.com/huggingface/datasets/pull/6070", "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "merged_at": "2023-07-25T18:10:16" }
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6069/comments
https://api.github.com/repos/huggingface/datasets/issues/6069/events
https://github.com/huggingface/datasets/issues/6069
1,820,831,535
I_kwDODunzps5sh68v
6,069
KeyError: dataset has no key "image"
{ "login": "etetteh", "id": 28512232, "node_id": "MDQ6VXNlcjI4NTEyMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/etetteh", "html_url": "https://github.com/etetteh", "followers_url": "https://api.github.com/users/etetteh/followers", "following_url": "https://api.github.com/users/etetteh/following{/other_user}", "gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/etetteh/subscriptions", "organizations_url": "https://api.github.com/users/etetteh/orgs", "repos_url": "https://api.github.com/users/etetteh/repos", "events_url": "https://api.github.com/users/etetteh/events{/privacy}", "received_events_url": "https://api.github.com/users/etetteh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
5
2023-07-25T17:45:50
2023-07-26T17:33:49
null
NONE
null
null
null
### Describe the bug

I've loaded a local image dataset with:

```python
ds = load_dataset("imagefolder", data_dir=path-to-data)
```

and defined a transform to process the data, following the Datasets docs. However, I get a KeyError, indicating there's no "image" key in my dataset. When I printed out the example_batch sent to the transformation function, it shows that only the labels are being sent to the function. For some reason, the images are not in the example batches.

### Steps to reproduce the bug

I'm using the latest stable version of datasets.

### Expected behavior

I expect the example batches to contain both images and labels.

### Environment info

I'm using the latest stable version of datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6069/timeline
null
null
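For a KeyError like issue 6069 above, it helps to confirm that the `image` column actually exists after loading, and to log exactly which keys reach the transform. A hedged diagnostic sketch (the `data_dir` value is a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path-to-data")  # placeholder path

# First confirm the column survived loading.
print(ds["train"].column_names)  # expect something like ['image', 'label']
print(ds["train"].features)


def transforms(example_batch):
    # Log the keys that actually reach the transform.
    print("keys in batch:", list(example_batch.keys()))
    return example_batch


ds["train"].set_transform(transforms)
_ = ds["train"][0]  # triggers the transform once
```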
https://api.github.com/repos/huggingface/datasets/issues/6068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6068/comments
https://api.github.com/repos/huggingface/datasets/issues/6068/events
https://github.com/huggingface/datasets/pull/6068
1,820,106,952
PR_kwDODunzps5WUkZi
6,068
fix tqdm lock deletion
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-07-25T11:17:25
2023-07-25T15:29:39
2023-07-25T15:17:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6068", "html_url": "https://github.com/huggingface/datasets/pull/6068", "diff_url": "https://github.com/huggingface/datasets/pull/6068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6068.patch", "merged_at": "2023-07-25T15:17:50" }
related to https://github.com/huggingface/datasets/issues/6066
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6068/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6067/comments
https://api.github.com/repos/huggingface/datasets/issues/6067/events
https://github.com/huggingface/datasets/pull/6067
1,819,919,025
PR_kwDODunzps5WT7EQ
6,067
fix tqdm lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-25T09:32:16
2023-07-25T10:02:43
2023-07-25T09:54:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6067", "html_url": "https://github.com/huggingface/datasets/pull/6067", "diff_url": "https://github.com/huggingface/datasets/pull/6067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6067.patch", "merged_at": "2023-07-25T09:54:12" }
close https://github.com/huggingface/datasets/issues/6066
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6067/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6066/comments
https://api.github.com/repos/huggingface/datasets/issues/6066/events
https://github.com/huggingface/datasets/issues/6066
1,819,717,542
I_kwDODunzps5sdq-m
6,066
AttributeError: '_tqdm_cls' object has no attribute '_lock'
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
2023-07-25T07:24:36
2023-07-26T10:56:25
2023-07-26T10:56:24
NONE
null
null
null
### Describe the bug

```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
    data_files = DataFilesDict.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
    DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
    origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
    return thread_map(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
    with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
    next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
    del tqdm_class._lock
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```

### Steps to reproduce the bug

Happens occasionally.

### Expected behavior

I added a print in tqdm `ensure_lock()`, got an `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>` print.

According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24

```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
    """get (create if necessary) and then restore `tqdm_class`'s lock"""
    print("ensure_lock", tqdm_class, lock_name)
    old_lock = getattr(tqdm_class, '_lock', None)  # don't create a new lock
    lock = old_lock or tqdm_class.get_lock()  # maybe create a new lock
    lock = getattr(lock, lock_name, lock)  # maybe subtype
    tqdm_class.set_lock(lock)
    yield lock
    if old_lock is None:
        del tqdm_class._lock  # <-- It tries to del the `_lock` attribute from tqdm_class.
    else:
        tqdm_class.set_lock(old_lock)
```

But Hugging Face datasets' `datasets.utils.logging._tqdm_cls` does not have the field `_lock`: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205

```python
class _tqdm_cls:
    def __call__(self, *args, disable=False, **kwargs):
        if _tqdm_active and not disable:
            return tqdm_lib.tqdm(*args, **kwargs)
        else:
            return EmptyTqdm(*args, **kwargs)

    def set_lock(self, *args, **kwargs):
        self._lock = None
        if _tqdm_active:
            return tqdm_lib.tqdm.set_lock(*args, **kwargs)

    def get_lock(self):
        if _tqdm_active:
            return tqdm_lib.tqdm.get_lock()
```

### Environment info

Python 3.11.4

tqdm '4.65.0'

datasets master
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6066/timeline
null
completed
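To restate the protocol mismatch analyzed in issue 6066 above: tqdm's `ensure_lock()` ends with `del tqdm_class._lock` when no lock existed beforehand, so any tqdm-compatible proxy has to tolerate that deletion. One way to sketch a tolerant proxy (a hypothetical patch for illustration, not necessarily the fix that was merged in PRs 6067/6068):

```python
import tqdm as tqdm_lib


class _tqdm_cls:
    """Minimal tqdm proxy that survives tqdm.contrib.concurrent.ensure_lock()."""

    def set_lock(self, *args, **kwargs):
        self._lock = None
        return tqdm_lib.tqdm.set_lock(*args, **kwargs)

    def get_lock(self):
        return tqdm_lib.tqdm.get_lock()

    def __delattr__(self, attr):
        # ensure_lock() does `del tqdm_class._lock`; swallow a missing
        # attribute instead of raising AttributeError.
        if attr == "_lock":
            self.__dict__.pop(attr, None)
        else:
            super().__delattr__(attr)
```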
https://api.github.com/repos/huggingface/datasets/issues/6065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6065/comments
https://api.github.com/repos/huggingface/datasets/issues/6065/events
https://github.com/huggingface/datasets/pull/6065
1,819,334,932
PR_kwDODunzps5WR8jI
6,065
Add column type guessing from map return function
{ "login": "piercefreeman", "id": 1712066, "node_id": "MDQ6VXNlcjE3MTIwNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piercefreeman", "html_url": "https://github.com/piercefreeman", "followers_url": "https://api.github.com/users/piercefreeman/followers", "following_url": "https://api.github.com/users/piercefreeman/following{/other_user}", "gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}", "starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions", "organizations_url": "https://api.github.com/users/piercefreeman/orgs", "repos_url": "https://api.github.com/users/piercefreeman/repos", "events_url": "https://api.github.com/users/piercefreeman/events{/privacy}", "received_events_url": "https://api.github.com/users/piercefreeman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-07-25T00:34:17
2023-07-26T15:13:45
2023-07-26T15:13:44
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6065", "html_url": "https://github.com/huggingface/datasets/pull/6065", "diff_url": "https://github.com/huggingface/datasets/pull/6065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6065.patch", "merged_at": null }
As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.

This PR provides an alternative approach, which is functionally equivalent to specifying features but a bit cleaner within a larger mapping pipeline. It allows clients to typehint the return variable coming from the mapper function - if we find one of these type annotations specified, and no explicit features have been passed in, we'll try to convert it into a Features map. If the map function runs and casting is unable to succeed, it will raise a DatasetTransformationNotAllowedError that indicates the typehint may be to blame. It works for batched and non-batched mapping functions.

Currently supported column types:
- builtins primitives: string, int, float, bool
- dictionaries, lists (nested and one-deep)
- Optional types and None-Unions (synonymous with optional types)

It's used like:

```python
class DatasetTyped(TypedDict):
    texts: list[str]


def dataset_typed_map(batch) -> DatasetTyped:
    return {"texts": [text.split() for text in batch["raw_text"]]}


dataset = {"raw_text": ["", "This is a test", "This is another test"]}

with Dataset.from_dict(dataset) as dset:
    new_dataset = dset.map(
        dataset_typed_map,
        batched=True,
        batch_size=1,
        num_proc=1,
    )
```

Open questions:
- Should logging indicate we have automatically guessed these types? Or proceed quietly until we hit an error (as is the current implementation).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6065/timeline
null
null
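For comparison with PR 6065 above, the explicit-`features` form that the PR calls functionally equivalent is already supported by the library; a minimal sketch:

```python
from datasets import Dataset, Features, Sequence, Value

# Explicit schema for the mapped output, equivalent to the
# `texts: list[str]` typehint in the PR's example.
features = Features({"texts": Sequence(Value("string"))})


def split_texts(batch):
    return {"texts": [text.split() for text in batch["raw_text"]]}


dset = Dataset.from_dict({"raw_text": ["a b", "This is a test"]})
new_dataset = dset.map(
    split_texts,
    batched=True,
    remove_columns=["raw_text"],
    features=features,  # pyarrow gets the column types up front
)
print(new_dataset.features)
```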
https://api.github.com/repos/huggingface/datasets/issues/6064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6064/comments
https://api.github.com/repos/huggingface/datasets/issues/6064/events
https://github.com/huggingface/datasets/pull/6064
1,818,703,725
PR_kwDODunzps5WPzAv
6,064
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-24T15:56:00
2023-07-24T16:05:19
2023-07-24T15:56:10
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6064", "html_url": "https://github.com/huggingface/datasets/pull/6064", "diff_url": "https://github.com/huggingface/datasets/pull/6064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6064.patch", "merged_at": "2023-07-24T15:56:10" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6064/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6063/comments
https://api.github.com/repos/huggingface/datasets/issues/6063/events
https://github.com/huggingface/datasets/pull/6063
1,818,679,485
PR_kwDODunzps5WPtxi
6,063
Release: 2.14.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-07-24T15:41:19
2023-07-24T16:05:16
2023-07-24T15:47:51
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6063", "html_url": "https://github.com/huggingface/datasets/pull/6063", "diff_url": "https://github.com/huggingface/datasets/pull/6063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6063.patch", "merged_at": "2023-07-24T15:47:51" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6063/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6062/comments
https://api.github.com/repos/huggingface/datasets/issues/6062/events
https://github.com/huggingface/datasets/pull/6062
1,818,341,584
PR_kwDODunzps5WOj62
6,062
Improve `Dataset.from_list` docstring
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-07-24T12:36:38
2023-07-24T14:43:48
2023-07-24T14:34:43
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6062", "html_url": "https://github.com/huggingface/datasets/pull/6062", "diff_url": "https://github.com/huggingface/datasets/pull/6062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6062.patch", "merged_at": "2023-07-24T14:34:43" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6062/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6061/comments
https://api.github.com/repos/huggingface/datasets/issues/6061/events
https://github.com/huggingface/datasets/pull/6061
1,818,337,136
PR_kwDODunzps5WOi79
6,061
Dill 3.7 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-07-24T12:33:58
2023-07-24T14:13:20
2023-07-24T14:04:36
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6061", "html_url": "https://github.com/huggingface/datasets/pull/6061", "diff_url": "https://github.com/huggingface/datasets/pull/6061.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6061.patch", "merged_at": "2023-07-24T14:04:36" }
Adds support for dill 3.7.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6061/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6060/comments
https://api.github.com/repos/huggingface/datasets/issues/6060/events
https://github.com/huggingface/datasets/issues/6060
1,816,614,120
I_kwDODunzps5sR1To
6,060
Dataset.map() executes twice in PyTorch DDP mode
{ "login": "wanghaoyucn", "id": 39429965, "node_id": "MDQ6VXNlcjM5NDI5OTY1", "avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wanghaoyucn", "html_url": "https://github.com/wanghaoyucn", "followers_url": "https://api.github.com/users/wanghaoyucn/followers", "following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}", "gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}", "starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions", "organizations_url": "https://api.github.com/users/wanghaoyucn/orgs", "repos_url": "https://api.github.com/users/wanghaoyucn/repos", "events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}", "received_events_url": "https://api.github.com/users/wanghaoyucn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-07-22T05:06:43
2023-07-24T19:29:55
null
NONE
null
null
null
### Describe the bug

I use `torchrun --standalone --nproc_per_node=2 train.py` to start training and wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work: when I train the model, the mapping runs twice, but when I run a test of the dataset and dataloader (just printing the batches), it works. The dataset-loading code is the same in both cases. On another server with 30 CPU cores and 2 GPUs it doesn't work either. I tried checking both `rank` and `local_rank`, but neither helped.

### Steps to reproduce the bug

Use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run this code:

```python
if args.distributed and world_size > 1:
    if args.local_rank > 0:
        print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
        torch.distributed.barrier()
    print("Mapping dataset")
    dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True),
                          num_proc=8, desc="cut_reorder_keys")
    dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16),
                          num_proc=8, desc="random_shift")
    dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False),
                                    num_proc=8, desc="cut_reorder_keys")
    if args.local_rank == 0:
        print("Mapping finished, loading results from main process")
        torch.distributed.barrier()
```

### Expected behavior

Only the main process executes `map`, while the other processes load the cache from disk.

### Environment info

Server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`

Server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04`
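A minimal sketch of one mitigation, assuming the barrier pattern above is kept (the function name and cache path here are hypothetical): pinning `cache_file_name` makes every rank resolve the exact same cache files, and a named function avoids lambda fingerprints differing between processes.

```python
import torch

# A named function (instead of a lambda) keeps the map fingerprint stable across ranks.
def preprocess(example):
    ...

if args.local_rank > 0:
    torch.distributed.barrier()  # non-main ranks wait for rank 0 to build the cache

dataset = dataset.map(
    preprocess,
    num_proc=8,
    cache_file_name="cache/train_preprocessed.arrow",  # hypothetical path, identical on every rank
)

if args.local_rank == 0:
    torch.distributed.barrier()  # release the waiting ranks once mapping is done
```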
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6060/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6059/comments
https://api.github.com/repos/huggingface/datasets/issues/6059/events
https://github.com/huggingface/datasets/issues/6059
1,816,537,176
I_kwDODunzps5sRihY
6,059
Provide ability to load label mappings from file
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2023-07-22T02:04:19
2023-07-22T02:04:19
null
NONE
null
null
null
### Feature request

My task is classification over a dataset with a large label set that includes a hierarchy. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification over a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.

It is possible to pass a file to `ClassLabel`, but I cannot see an easy way of using this with `GeneratorBasedBuilder`, since `self._info` is called before the `dl_manager` is constructed; so even if my dataset contains, say, `label_mappings.json`, there's no way of loading it in order to construct the `datasets.DatasetInfo`.

I can see other uses for accessing the `download_manager` from `self._info` - e.g. if the files contain a schema (`arrow` or `parquet` files), the `datasets.DatasetInfo` could be inferred.

The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.

```python
class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
                }
            ),
            task_templates=[TextClassification(text_column="text", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
        test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
        ]

    def _generate_examples(self, filepath):
        """Generate AG News examples."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.DictReader(csv_file)
            for id_, row in enumerate(csv_reader):
                yield id_, row
```

### Motivation

Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.

### Your contribution

I'm willing to work on a PR with guidance.
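A minimal sketch of a simpler workaround, assuming `label_mappings.json` is a plain JSON list of label names (that file layout is an assumption here): load it at module import time, before `_info()` runs, so no `dl_manager` is needed. Only the `_info` fragment of the builder above is shown.

```python
import json

import datasets

# Assumed layout: the file is a JSON list of label-name strings.
with open("label_mappings.json", encoding="utf-8") as f:
    _LABEL_NAMES = json.load(f)

class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=_LABEL_NAMES),
                }
            ),
        )
```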
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6059/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6058/comments
https://api.github.com/repos/huggingface/datasets/issues/6058/events
https://github.com/huggingface/datasets/issues/6058
1,815,131,397
I_kwDODunzps5sMLUF
6,058
laion-coco download error
{ "login": "yangyijune", "id": 54424110, "node_id": "MDQ6VXNlcjU0NDI0MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangyijune", "html_url": "https://github.com/yangyijune", "followers_url": "https://api.github.com/users/yangyijune/followers", "following_url": "https://api.github.com/users/yangyijune/following{/other_user}", "gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions", "organizations_url": "https://api.github.com/users/yangyijune/orgs", "repos_url": "https://api.github.com/users/yangyijune/repos", "events_url": "https://api.github.com/users/yangyijune/events{/privacy}", "received_events_url": "https://api.github.com/users/yangyijune/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-21T04:24:15
2023-07-22T01:42:06
2023-07-22T01:42:06
NONE
null
null
null
### Describe the bug

The full trace:

```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0. You can remove this warning by passing 'verification_mode=no_checks' instead.
  warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
  File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
    dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
    self._download_and_prepare(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
    generator = self._generate_tables(**gen_kwargs)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
    parquet_file = pq.ParquetFile(f)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
    self.reader.open(
  File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```

I have carefully followed the instructions in #5264 but still get the same error.

Other helpful information:

```
ds = load_dataset("parquet", data_files="https://huggingface.co/datasets/laion/laion-coco/resolve/d22869de3ccd39dfec1507f7ded32e4a518dad24/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```

### Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("laion/laion-coco", ignore_verifications=True)  # fails with False as well
```

### Expected behavior

Properly load the laion-coco dataset.

### Environment info

- datasets==2.11.0
- torch==1.12.1
- python 3.10
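A sketch for diagnosing this (the cache path below is illustrative): a truncated or corrupted download raises the same `ArrowInvalid`, so it may help to validate the cached file with pyarrow and force a re-download instead of reusing the cache.

```python
import pyarrow.parquet as pq
from datasets import load_dataset

# Validate the cached download; a truncated file raises ArrowInvalid here too.
try:
    pq.ParquetFile("/path/to/cached/download")  # illustrative path, not the real cache entry
except Exception as exc:
    print("Not a valid parquet file:", exc)

# If it is corrupted, re-fetch everything instead of reusing the cache.
dataset = load_dataset("laion/laion-coco", download_mode="force_redownload")
```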
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6058/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6057/comments
https://api.github.com/repos/huggingface/datasets/issues/6057/events
https://github.com/huggingface/datasets/issues/6057
1,815,100,151
I_kwDODunzps5sMDr3
6,057
Why is the speed difference when generating examples so big?
{ "login": "pixeli99", "id": 46072190, "node_id": "MDQ6VXNlcjQ2MDcyMTkw", "avatar_url": "https://avatars.githubusercontent.com/u/46072190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pixeli99", "html_url": "https://github.com/pixeli99", "followers_url": "https://api.github.com/users/pixeli99/followers", "following_url": "https://api.github.com/users/pixeli99/following{/other_user}", "gists_url": "https://api.github.com/users/pixeli99/gists{/gist_id}", "starred_url": "https://api.github.com/users/pixeli99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pixeli99/subscriptions", "organizations_url": "https://api.github.com/users/pixeli99/orgs", "repos_url": "https://api.github.com/users/pixeli99/repos", "events_url": "https://api.github.com/users/pixeli99/events{/privacy}", "received_events_url": "https://api.github.com/users/pixeli99/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-21T03:34:49
2023-07-21T16:41:09
null
NONE
null
null
null
```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
    with open(metadata_path, 'r') as file:
        metadata = json.load(file)

    for idx, item in enumerate(metadata):
        image_path = item.get('image_path')
        text_content = item.get('text_content')
        image_data = open(image_path, "rb").read()

        yield idx, {
            "text": text_content,
            "image": {
                "path": image_path,
                "bytes": image_data,
            },
            "conditioning_image": {
                "path": image_path,
                "bytes": image_data,
            },
        }
```

Hello, I use the above function to process my local dataset, but I am very surprised that the speed at which examples are generated varies a lot. When I start a training task, **it is sometimes 1000 examples/s and sometimes only 10 examples/s.**

![image](https://github.com/huggingface/datasets/assets/46072190/cdc17661-8267-4fd8-b30c-b74d505efd9b)

I'm not saying the speed changes within a single run; I mean the reading speed differs between runs, which forces me to restart training over and over until the example-generation speed is normal.
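A minimal variation to rule out duplicated I/O, assuming the builder's features declare `datasets.Image()` columns (in that case a plain file path is accepted and reading/decoding is deferred to the feature instead of happening eagerly in the generator):

```python
import json

def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
    with open(metadata_path, "r") as file:
        metadata = json.load(file)
    for idx, item in enumerate(metadata):
        image_path = item.get("image_path")
        # Yielding only the path defers the read, so the bytes are neither
        # loaded eagerly nor stored twice for the two image columns.
        yield idx, {
            "text": item.get("text_content"),
            "image": image_path,
            "conditioning_image": image_path,
        }
```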
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6057/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6056/comments
https://api.github.com/repos/huggingface/datasets/issues/6056/events
https://github.com/huggingface/datasets/pull/6056
1,815,086,963
PR_kwDODunzps5WD4RY
6,056
Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded
{ "login": "AntreasAntoniou", "id": 10792502, "node_id": "MDQ6VXNlcjEwNzkyNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntreasAntoniou", "html_url": "https://github.com/AntreasAntoniou", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-07-21T03:13:21
2023-07-24T15:17:28
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6056", "html_url": "https://github.com/huggingface/datasets/pull/6056", "diff_url": "https://github.com/huggingface/datasets/pull/6056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6056.patch", "merged_at": null }
Context: issue #5990

To implement the checkpointing, I introduce a metadata folder that keeps one YAML file for each set being uploaded. This YAML keeps track of which shards have already been uploaded and what the index of the latest one was. Using this information, the `push_to_hub` function can retrieve the past upload history on demand and continue mapping and uploading from where it left off.
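Roughly the shape of the bookkeeping described above, as an illustrative sketch only (the function names and YAML layout are hypothetical, not the PR's actual implementation):

```python
import yaml  # PyYAML

def load_uploaded_shards(checkpoint_path):
    """Return the set of shard indices recorded as uploaded, if any."""
    try:
        with open(checkpoint_path) as f:
            state = yaml.safe_load(f) or {}
    except FileNotFoundError:
        state = {}
    return set(state.get("uploaded_shards", []))

def record_shard(checkpoint_path, uploaded_shards):
    """Persist the updated upload history after each successful shard push."""
    with open(checkpoint_path, "w") as f:
        yaml.safe_dump({"uploaded_shards": sorted(uploaded_shards)}, f)
```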
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6056/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6055/comments
https://api.github.com/repos/huggingface/datasets/issues/6055/events
https://github.com/huggingface/datasets/issues/6055
1,813,524,145
I_kwDODunzps5sGC6x
6,055
Fix host URL in The Pile datasets
{ "login": "nickovchinnikov", "id": 7540752, "node_id": "MDQ6VXNlcjc1NDA3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickovchinnikov", "html_url": "https://github.com/nickovchinnikov", "followers_url": "https://api.github.com/users/nickovchinnikov/followers", "following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}", "gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions", "organizations_url": "https://api.github.com/users/nickovchinnikov/orgs", "repos_url": "https://api.github.com/users/nickovchinnikov/repos", "events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}", "received_events_url": "https://api.github.com/users/nickovchinnikov/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-07-20T09:08:52
2023-07-20T09:09:37
null
NONE
null
null
null
### Describe the bug

In #3627 and #5543, you tried to fix the host URL in The Pile datasets. But both URLs are not working now:

`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`

And

`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`

### Steps to reproduce the bug

```python
from datasets import load_dataset

# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```

Result:

`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`

And

```python
from datasets import load_dataset

# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```

Result:

`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`

### Expected behavior

The files download as normal.

### Environment info

- `datasets` version: 2.9.0
- Platform: Windows
- Python version: 3.9.13
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6055/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
https://api.github.com/repos/huggingface/datasets/issues/6054/events
https://github.com/huggingface/datasets/issues/6054
1,813,271,304
I_kwDODunzps5sFFMI
6,054
Multi-processed `Dataset.map` slows down a lot when `import torch`
{ "login": "ShinoharaHare", "id": 47121592, "node_id": "MDQ6VXNlcjQ3MTIxNTky", "avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShinoharaHare", "html_url": "https://github.com/ShinoharaHare", "followers_url": "https://api.github.com/users/ShinoharaHare/followers", "following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}", "gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions", "organizations_url": "https://api.github.com/users/ShinoharaHare/orgs", "repos_url": "https://api.github.com/users/ShinoharaHare/repos", "events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}", "received_events_url": "https://api.github.com/users/ShinoharaHare/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
1
2023-07-20T06:36:14
2023-07-21T15:19:37
2023-07-21T15:19:37
NONE
null
null
null
### Describe the bug

When using `Dataset.map` with `num_proc > 1`, the speed slows down a lot if I add `import torch` at the start of the script, even though I don't use it. I'm not sure if it's `torch` only, or if any other "large" package would cause the same result. BTW, `import lightning` also slows it down.

Below are the progress bars of `Dataset.map`; the only difference between them is with or without `import torch`, but the speed varies by 6-7x.

- without `import torch`
![image](https://github.com/huggingface/datasets/assets/47121592/0233055a-ced4-424a-9f0f-32a2afd802c2)

- with `import torch`
![image](https://github.com/huggingface/datasets/assets/47121592/463eafb7-b81e-4eb9-91ca-fd7fe20f3d59)

### Steps to reproduce the bug

Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.

```python
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer

# import torch
# import lightning


def rearrange_datapoints(
    batch,
    tokenizer,
    sequence_length,
):
    datapoints = []

    input_ids = []
    for x in batch['input_ids']:
        input_ids += x

        while len(input_ids) >= sequence_length:
            datapoint = input_ids[:sequence_length]
            datapoints.append(datapoint)
            input_ids[:sequence_length] = []

    if input_ids:
        paddings = [-1] * (sequence_length - len(input_ids))
        datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
        datapoints.append(datapoint)

    batch['input_ids'] = datapoints
    return batch


if __name__ == '__main__':
    disable_caching()

    tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
    dataset = load_from_disk('...')

    dataset = dataset.map(
        rearrange_datapoints,
        fn_kwargs=dict(
            tokenizer=tokenizer,
            sequence_length=2048,
        ),
        batched=True,
        num_proc=8,
    )
```

### Expected behavior

The speed of the multi-processed `Dataset.map` function should be the same with and without `import torch`.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
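A commonly suggested mitigation to try, though not confirmed as the root cause here: importing torch can change thread settings that make the forked map workers oversubscribe the CPU, and pinning intra-op threads often restores throughput.

```python
import torch

# Pin torch's intra-op threads before mapping so the 8 map workers
# don't each spin up a full set of OpenMP threads.
torch.set_num_threads(1)
```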
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6053/comments
https://api.github.com/repos/huggingface/datasets/issues/6053/events
https://github.com/huggingface/datasets/issues/6053
1,812,635,902
I_kwDODunzps5sCqD-
6,053
Change package name from "datasets" to something less generic
{ "login": "geajack", "id": 2124157, "node_id": "MDQ6VXNlcjIxMjQxNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geajack", "html_url": "https://github.com/geajack", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "organizations_url": "https://api.github.com/users/geajack/orgs", "repos_url": "https://api.github.com/users/geajack/repos", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "received_events_url": "https://api.github.com/users/geajack/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2023-07-19T19:53:28
2023-07-19T19:55:04
null
NONE
null
null
null
### Feature request

I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and, at my most irritable, frankly rude.

My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:

```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```

or things like

```
import huggingface.transformers as tf
# tf.load_model(), etc
```

If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.

I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this. Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".

Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)

### Motivation

Not taking up package names the user is likely to want to use.

### Your contribution

No - more a matter of internal discussion among core library authors.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6053/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6052/comments
https://api.github.com/repos/huggingface/datasets/issues/6052/events
https://github.com/huggingface/datasets/pull/6052
1,812,145,100
PR_kwDODunzps5V5yOi
6,052
Remove `HfFileSystem` and deprecate `S3FileSystem`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2023-07-19T15:00:01
2023-07-19T17:39:11
2023-07-19T17:27:17
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6052", "html_url": "https://github.com/huggingface/datasets/pull/6052", "diff_url": "https://github.com/huggingface/datasets/pull/6052.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6052.patch", "merged_at": "2023-07-19T17:27:17" }
Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`.

cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6052/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6051/comments
https://api.github.com/repos/huggingface/datasets/issues/6051/events
https://github.com/huggingface/datasets/issues/6051
1,811,549,650
I_kwDODunzps5r-g3S
6,051
Skipping shards already in the remote repo when resuming an upload
{ "login": "rs9000", "id": 9029817, "node_id": "MDQ6VXNlcjkwMjk4MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rs9000", "html_url": "https://github.com/rs9000", "followers_url": "https://api.github.com/users/rs9000/followers", "following_url": "https://api.github.com/users/rs9000/following{/other_user}", "gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}", "starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rs9000/subscriptions", "organizations_url": "https://api.github.com/users/rs9000/orgs", "repos_url": "https://api.github.com/users/rs9000/repos", "events_url": "https://api.github.com/users/rs9000/events{/privacy}", "received_events_url": "https://api.github.com/users/rs9000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-19T09:25:26
2023-07-20T18:16:01
2023-07-20T18:16:00
NONE
null
null
null
### Describe the bug

For some reason, when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume. From my understanding, the problem is in this part of the code in `arrow_dataset.py`:

```python
for index, shard in logging.tqdm(
    enumerate(itertools.chain([first_shard], shards_iter)),
    desc="Pushing dataset shards to the dataset hub",
    total=num_shards,
    disable=not logging.is_progress_bar_enabled(),
):
    shard_path_in_repo = path_in_repo(index, shard)
    # Upload a shard only if it doesn't already exist in the repository
    if shard_path_in_repo not in data_files:
```

In particular, iterating the generator is slow because of this call:

```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```

I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.

### Steps to reproduce the bug

1. Start the upload

```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```

2. Stop and restart the upload after hundreds of shards

### Expected behavior

Skip the already-uploaded shards faster.

### Environment info

- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
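A sketch of what the issue asks for, with hypothetical names: if `num_uploaded` shards are already in the repo, only the remaining shards need to be materialized, since `Dataset.shard` can slice any index directly.

```python
def remaining_shards(dataset, num_shards, num_uploaded):
    # Skip the slicing work entirely for shards that are already uploaded.
    for index in range(num_uploaded, num_shards):
        yield index, dataset.shard(num_shards=num_shards, index=index, contiguous=True)
```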
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6051/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6049/comments
https://api.github.com/repos/huggingface/datasets/issues/6049/events
https://github.com/huggingface/datasets/pull/6049
1,810,378,706
PR_kwDODunzps5Vz1pd
6,049
Update `ruff` version in pre-commit config
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-18T17:13:50
2023-07-20T12:09:16
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6049", "html_url": "https://github.com/huggingface/datasets/pull/6049", "diff_url": "https://github.com/huggingface/datasets/pull/6049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6049.patch", "merged_at": null }
so that it corresponds to the one that is being run in CI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6049/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6048/comments
https://api.github.com/repos/huggingface/datasets/issues/6048/events
https://github.com/huggingface/datasets/issues/6048
1,809,629,346
I_kwDODunzps5r3MCi
6,048
When I use datasets.load_dataset, I encounter an HTTP connection error
{ "login": "yangy1992", "id": 137855591, "node_id": "U_kgDOCDeCZw", "avatar_url": "https://avatars.githubusercontent.com/u/137855591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangy1992", "html_url": "https://github.com/yangy1992", "followers_url": "https://api.github.com/users/yangy1992/followers", "following_url": "https://api.github.com/users/yangy1992/following{/other_user}", "gists_url": "https://api.github.com/users/yangy1992/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangy1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangy1992/subscriptions", "organizations_url": "https://api.github.com/users/yangy1992/orgs", "repos_url": "https://api.github.com/users/yangy1992/repos", "events_url": "https://api.github.com/users/yangy1992/events{/privacy}", "received_events_url": "https://api.github.com/users/yangy1992/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-18T10:16:34
2023-07-18T16:18:39
2023-07-18T16:18:39
NONE
null
null
null
### Describe the bug

```python
common_voice_test = load_dataset("audiofolder", data_dir="./dataset/", cache_dir="./cache", split=datasets.Split.TEST)
```

When I run the code above, I get the error below:

```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```

All my data is on the local machine. Why does it need to connect to the internet? How can I fix it? My machine cannot connect to the internet.

### Steps to reproduce the bug

1. Run the snippet above on a machine without internet access.

### Expected behavior

No error when using the `load_dataset` function.

### Environment info

python=3.8.15
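A sketch of a possible fix for fully offline machines, assuming a `datasets` version recent enough to ship `audiofolder` as a packaged (local) module: setting offline mode stops the lookup of the loader script on raw.githubusercontent.com.

```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

import datasets
from datasets import load_dataset

common_voice_test = load_dataset(
    "audiofolder", data_dir="./dataset/", cache_dir="./cache", split=datasets.Split.TEST
)
```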
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6048/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6047/comments
https://api.github.com/repos/huggingface/datasets/issues/6047/events
https://github.com/huggingface/datasets/pull/6047
1,809,627,947
PR_kwDODunzps5VxRLA
6,047
Bump dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-18T10:15:39
2023-07-18T10:28:01
2023-07-18T10:15:52
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6047", "html_url": "https://github.com/huggingface/datasets/pull/6047", "diff_url": "https://github.com/huggingface/datasets/pull/6047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6047.patch", "merged_at": "2023-07-18T10:15:52" }
workaround to fix an issue with transformers CI https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6047/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6046/comments
https://api.github.com/repos/huggingface/datasets/issues/6046/events
https://github.com/huggingface/datasets/issues/6046
1,808,154,414
I_kwDODunzps5rxj8u
6,046
Support proxy and user-agent in fsspec calls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
0
2023-07-17T16:39:26
2023-07-17T16:40:37
null
MEMBER
null
null
null
Since we switched to the new `HfFileSystem`, we no longer apply the user's proxy and user-agent settings. Using the `HTTP_PROXY` and `HTTPS_PROXY` environment variables works though, since we use `aiohttp` to call the HF Hub. This can be implemented in `_prepare_single_hop_path_and_storage_options`. Though ideally the `HfFileSystem` could support passing at least the proxies.
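A minimal sketch of the environment-variable workaround mentioned above (the proxy URL is hypothetical):

```python
import os

# Per the issue, the aiohttp-based Hub calls honor these variables,
# so setting them before any request applies the proxy globally.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"
```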
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6046/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6045/comments
https://api.github.com/repos/huggingface/datasets/issues/6045/events
https://github.com/huggingface/datasets/pull/6045
1,808,072,270
PR_kwDODunzps5Vr-r1
6,045
Check if column names match in Parquet loader only when config `features` are specified
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-07-17T15:50:15
2023-07-24T14:45:56
2023-07-24T14:35:03
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6045", "html_url": "https://github.com/huggingface/datasets/pull/6045", "diff_url": "https://github.com/huggingface/datasets/pull/6045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6045.patch", "merged_at": "2023-07-24T14:35:03" }
Fix #6039
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6045/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6044/comments
https://api.github.com/repos/huggingface/datasets/issues/6044/events
https://github.com/huggingface/datasets/pull/6044
1,808,057,906
PR_kwDODunzps5Vr7jr
6,044
Rename "pattern" to "path" in YAML data_files configs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2023-07-17T15:41:16
2023-07-19T16:59:55
2023-07-19T16:48:06
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6044", "html_url": "https://github.com/huggingface/datasets/pull/6044", "diff_url": "https://github.com/huggingface/datasets/pull/6044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6044.patch", "merged_at": "2023-07-19T16:48:06" }
To make it easier for users to understand. They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s> Glob patterns are still supported though
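For illustration, a sketch of what a YAML data_files config could look like after this rename (the layout is assumed from the PR description rather than copied from the final docs):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*   # glob patterns are still supported
  - split: test
    path: data/test.csv  # a single path
```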
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6044/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6043/comments
https://api.github.com/repos/huggingface/datasets/issues/6043/events
https://github.com/huggingface/datasets/issues/6043
1,807,771,750
I_kwDODunzps5rwGhm
6,043
Compression kwargs have no effect when saving datasets as csv
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-07-17T13:19:21
2023-07-22T17:34:18
null
NONE
null
null
null
### Describe the bug When attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()`, which get piped to pandas' `pandas.DataFrame.to_csv`, have no effect - the dataset does not get compressed. A warning is raised if explicitly providing a `compression` kwarg, but no warnings are raised if relying on the defaults. This can lead to datasets secretly not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix. ### Steps to reproduce the bug ```python # dataset is not compressed (but at least a warning is emitted) import os import datasets dataset = datasets.load_dataset("rotten_tomatoes", split="train") dataset.to_csv("uncompressed.csv") print(os.path.getsize("uncompressed.csv")) # 1008607 dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}) print(os.path.getsize("compressed.csv.gz")) # 1008607 ``` ```shell >>> RuntimeWarning: compression has no effect when passing a non-binary object as input. csv_str = batch.to_pandas().to_csv( ``` ```python # dataset is not compressed and no warnings are emitted dataset.to_csv("compressed.csv.gz") print(os.path.getsize("compressed.csv.gz")) # 1008607 # compare with dataset.to_pandas().to_csv("pandas.csv.gz") print(os.path.getsize("pandas.csv.gz")) # 418561 ``` --- I think that this is because behind the scenes `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users who provide a path-like to `datasets.Dataset.to_csv` are likely not to expect / know that - leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg. ### Expected behavior The dataset should be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'` ### Environment info `datasets == 2.13.1`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6043/timeline
null
null
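A possible workaround for #6043 above, until the compression kwargs are piped through properly. This is only a sketch (the gzip-handle variant in particular is an assumption, not something tested in the report): either round-trip through pandas, which infers compression from the ".gz" suffix, or hand `to_csv` a binary handle that compresses on write.

```python
import gzip

import datasets

dataset = datasets.load_dataset("rotten_tomatoes", split="train")

# Option 1: let pandas infer gzip compression from the suffix
dataset.to_pandas().to_csv("compressed.csv.gz", index=False)

# Option 2 (assumed to work, since `to_csv` accepts a binary file object):
# give `datasets` a pre-compressed handle so the CSV bytes are gzipped on write
with gzip.open("compressed2.csv.gz", "wb") as f:
    dataset.to_csv(f)
```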
https://api.github.com/repos/huggingface/datasets/issues/6042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6042/comments
https://api.github.com/repos/huggingface/datasets/issues/6042/events
https://github.com/huggingface/datasets/pull/6042
1,807,516,762
PR_kwDODunzps5VqEyb
6,042
Fix unused DatasetInfosDict code in push_to_hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-17T11:03:09
2023-07-18T16:17:52
2023-07-18T16:08:42
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6042", "html_url": "https://github.com/huggingface/datasets/pull/6042", "diff_url": "https://github.com/huggingface/datasets/pull/6042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6042.patch", "merged_at": "2023-07-18T16:08:42" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6042/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6042/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6041/comments
https://api.github.com/repos/huggingface/datasets/issues/6041/events
https://github.com/huggingface/datasets/pull/6041
1,807,441,055
PR_kwDODunzps5Vp0GX
6,041
Flatten repository_structure docs on yaml
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-17T10:15:10
2023-07-17T10:24:51
2023-07-17T10:16:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6041", "html_url": "https://github.com/huggingface/datasets/pull/6041", "diff_url": "https://github.com/huggingface/datasets/pull/6041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6041.patch", "merged_at": "2023-07-17T10:16:22" }
To have Splits, Configurations and Builder parameters at the same doc level
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6041/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6040/comments
https://api.github.com/repos/huggingface/datasets/issues/6040/events
https://github.com/huggingface/datasets/pull/6040
1,807,410,238
PR_kwDODunzps5VptVf
6,040
Fix legacy_dataset_infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-17T09:56:21
2023-07-17T10:24:34
2023-07-17T10:16:03
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6040", "html_url": "https://github.com/huggingface/datasets/pull/6040", "diff_url": "https://github.com/huggingface/datasets/pull/6040.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6040.patch", "merged_at": "2023-07-17T10:16:03" }
This was causing the transformers CI to fail: https://circleci.com/gh/huggingface/transformers/855105
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6040/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6039/comments
https://api.github.com/repos/huggingface/datasets/issues/6039/events
https://github.com/huggingface/datasets/issues/6039
1,806,508,451
I_kwDODunzps5rrSGj
6,039
Loading column subset from parquet file produces error since version 2.13
{ "login": "kklemon", "id": 1430243, "node_id": "MDQ6VXNlcjE0MzAyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kklemon", "html_url": "https://github.com/kklemon", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "organizations_url": "https://api.github.com/users/kklemon/orgs", "repos_url": "https://api.github.com/users/kklemon/repos", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "received_events_url": "https://api.github.com/users/kklemon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-16T09:13:07
2023-07-24T14:35:04
2023-07-24T14:35:04
NONE
null
null
null
### Describe the bug `load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error: ``` Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables raise ValueError( ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}' ``` This seems to occur because `datasets` is checking whether the columns in the schema exactly match the provided list of columns, instead of whether they are a subset. ### Steps to reproduce the bug ```python # Prepare some sample data import pandas as pd iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv') iris.to_parquet('iris.parquet') # ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] print(iris.columns) # Load data with datasets from datasets import load_dataset # Load full parquet file dataset = load_dataset('parquet', data_files='iris.parquet') # Load column subset; throws error for datasets>=2.13 dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length']) ``` ### Expected behavior No error should be thrown and the given column subset should be loaded. ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6039/timeline
null
completed
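The overly strict subset check reported in #6039 above was relaxed by PR #6045 ("Fix #6039", recorded earlier in this listing). On affected versions (2.13.x), a workaround sketch is to load all columns and drop the unwanted ones afterwards with `select_columns`:

```python
from datasets import load_dataset

# Load the full parquet file, then keep only the wanted column
dataset = load_dataset("parquet", data_files="iris.parquet", split="train")
dataset = dataset.select_columns(["sepal_length"])
print(dataset.column_names)  # ['sepal_length']
```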
https://api.github.com/repos/huggingface/datasets/issues/6038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6038/comments
https://api.github.com/repos/huggingface/datasets/issues/6038/events
https://github.com/huggingface/datasets/issues/6038
1,805,960,244
I_kwDODunzps5rpMQ0
6,038
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
{ "login": "BaiMeiyingxue", "id": 53547009, "node_id": "MDQ6VXNlcjUzNTQ3MDA5", "avatar_url": "https://avatars.githubusercontent.com/u/53547009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BaiMeiyingxue", "html_url": "https://github.com/BaiMeiyingxue", "followers_url": "https://api.github.com/users/BaiMeiyingxue/followers", "following_url": "https://api.github.com/users/BaiMeiyingxue/following{/other_user}", "gists_url": "https://api.github.com/users/BaiMeiyingxue/gists{/gist_id}", "starred_url": "https://api.github.com/users/BaiMeiyingxue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BaiMeiyingxue/subscriptions", "organizations_url": "https://api.github.com/users/BaiMeiyingxue/orgs", "repos_url": "https://api.github.com/users/BaiMeiyingxue/repos", "events_url": "https://api.github.com/users/BaiMeiyingxue/events{/privacy}", "received_events_url": "https://api.github.com/users/BaiMeiyingxue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-15T07:58:08
2023-07-24T11:54:15
2023-07-24T11:54:15
NONE
null
null
null
Hi, I use the code below to load a local file ``` def _split_generators(self, dl_manager): # TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files. # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive # urls = _URLS[self.config.name] data_dir = dl_manager.download_and_extract(_URLs) print(data_dir) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, # These kwargs will be passed to _generate_examples gen_kwargs={ "filepath": os.path.join(data_dir["train"]), "split": "train", }, ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, # These kwargs will be passed to _generate_examples gen_kwargs={ "filepath": os.path.join(data_dir["dev"]), "split": "dev", }, ), ] ``` and this error occurred ``` Traceback (most recent call last): File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module> dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json') File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset builder_instance.download_and_prepare( File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare super()._download_and_prepare( File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? ``` Could you help me?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6038/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6037/comments
https://api.github.com/repos/huggingface/datasets/issues/6037/events
https://github.com/huggingface/datasets/issues/6037
1,805,887,184
I_kwDODunzps5ro6bQ
6,037
Documentation links to examples are broken
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-15T04:54:50
2023-07-17T22:35:14
2023-07-17T15:10:32
NONE
null
null
null
### Describe the bug The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example - text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files) ### Steps to reproduce the bug Click on the links to examples from the latest documentation ### Expected behavior Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py ### Environment info datasets v1.2.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6037/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6036/comments
https://api.github.com/repos/huggingface/datasets/issues/6036/events
https://github.com/huggingface/datasets/pull/6036
1,805,138,898
PR_kwDODunzps5ViKc4
6,036
Deprecate search API
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
8
2023-07-14T16:22:09
2023-07-21T19:53:51
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6036", "html_url": "https://github.com/huggingface/datasets/pull/6036", "diff_url": "https://github.com/huggingface/datasets/pull/6036.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6036.patch", "merged_at": null }
The Search API only supports Faiss and Elasticsearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support Elasticsearch 8.0, testing is difficult, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed). If we decide to deprecate/remove it, the following usage instances need to be addressed: * [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342 ) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly should be OK in these instances as it's pretty simple to use for basic scenarios. Alternatively, we can use `langchain`, but this adds an extra dependency * [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once, push it to the Hub repo, and then read it from there) cc @huggingface/datasets @LysandreJik for your opinion
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6036/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6035/comments
https://api.github.com/repos/huggingface/datasets/issues/6035/events
https://github.com/huggingface/datasets/pull/6035
1,805,087,687
PR_kwDODunzps5Vh_QR
6,035
Dataset representation
{ "login": "Ganryuu", "id": 63643948, "node_id": "MDQ6VXNlcjYzNjQzOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ganryuu", "html_url": "https://github.com/Ganryuu", "followers_url": "https://api.github.com/users/Ganryuu/followers", "following_url": "https://api.github.com/users/Ganryuu/following{/other_user}", "gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions", "organizations_url": "https://api.github.com/users/Ganryuu/orgs", "repos_url": "https://api.github.com/users/Ganryuu/repos", "events_url": "https://api.github.com/users/Ganryuu/events{/privacy}", "received_events_url": "https://api.github.com/users/Ganryuu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-14T15:42:37
2023-07-19T19:41:35
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6035", "html_url": "https://github.com/huggingface/datasets/pull/6035", "diff_url": "https://github.com/huggingface/datasets/pull/6035.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6035.patch", "merged_at": null }
`__repr__` and `_repr_html_` are now both similar to those of Polars
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6035/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6034/comments
https://api.github.com/repos/huggingface/datasets/issues/6034/events
https://github.com/huggingface/datasets/issues/6034
1,804,501,361
I_kwDODunzps5rjoFx
6,034
load_dataset hangs on WSL
{ "login": "Andy-Zhou2", "id": 20140522, "node_id": "MDQ6VXNlcjIwMTQwNTIy", "avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Andy-Zhou2", "html_url": "https://github.com/Andy-Zhou2", "followers_url": "https://api.github.com/users/Andy-Zhou2/followers", "following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}", "gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions", "organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs", "repos_url": "https://api.github.com/users/Andy-Zhou2/repos", "events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}", "received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-14T09:03:10
2023-07-14T14:48:29
2023-07-14T14:48:29
NONE
null
null
null
### Describe the bug load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available, so I am not sure why a socket connection is needed. ([profiler result](https://ibb.co/0Btbbp8)) It only happens on WSL for me. It works on native Windows and on my MacBook (the cache is quickly recognized and loaded within a second). ### Steps to reproduce the bug I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64) Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux >>> import datasets >>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes ### Expected behavior The cache is quickly recognized and loaded within a second. ### Environment info Please let me know if I should provide more environment information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6034/timeline
null
completed
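If the dataset in #6034 above is already cached, one way to rule out the socket connection entirely is to force offline mode via the `HF_DATASETS_OFFLINE` environment variable - a sketch, assuming the dataset is fully cached:

```python
import os

# Read by `datasets` at import time, so it must be set before the import
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# Served from the local cache; no network connection is attempted
dataset = datasets.load_dataset("ai2_arc", "ARC-Challenge")
```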
https://api.github.com/repos/huggingface/datasets/issues/6033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6033/comments
https://api.github.com/repos/huggingface/datasets/issues/6033/events
https://github.com/huggingface/datasets/issues/6033
1,804,482,051
I_kwDODunzps5rjjYD
6,033
`map` function doesn't fully utilize `input_columns`.
{ "login": "kwonmha", "id": 8953934, "node_id": "MDQ6VXNlcjg5NTM5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kwonmha", "html_url": "https://github.com/kwonmha", "followers_url": "https://api.github.com/users/kwonmha/followers", "following_url": "https://api.github.com/users/kwonmha/following{/other_user}", "gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}", "starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions", "organizations_url": "https://api.github.com/users/kwonmha/orgs", "repos_url": "https://api.github.com/users/kwonmha/repos", "events_url": "https://api.github.com/users/kwonmha/events{/privacy}", "received_events_url": "https://api.github.com/users/kwonmha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-14T08:49:28
2023-07-14T09:16:04
2023-07-14T09:16:04
NONE
null
null
null
### Describe the bug I wanted to select only some columns of data, and I thought that's why the argument `input_columns` exists. What I expected is this: if there are ["a", "b", "c", "d"] columns and I set `input_columns=["a", "d"]`, the data will have only the ["a", "d"] columns. But it doesn't select columns; it preserves the existing columns. The main cause is the `update` call on the `dict`-type `transformed_batch`. https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691 `transformed_batch` gets all the columns via `transformed_batch = dict(batch)`. Even though `function_args` selects `input_columns`, `update` preserves the columns other than `input_columns`. I think it should build a new dictionary with only the columns in `input_columns`, like this: ``` # transformed_batch = dict(batch) # transformed_batch.update(self.function(*function_args, **self.fn_kwargs)) # This is what I think is correct: transformed_batch = self.function(*function_args, **self.fn_kwargs) ``` Let me know how to use `input_columns`. ### Steps to reproduce the bug Described all above. ### Expected behavior Described all above. ### Environment info datasets: 2.12 python: 3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6033/timeline
null
completed
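A sketch of the actual contract behind #6033 above (assumes `datasets` >= 2.11 for `to_iterable_dataset`; printed outputs are indicative): `input_columns` only controls what the mapped function receives, and dropping columns requires `remove_columns`.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": [2], "c": [3]}).to_iterable_dataset()

# `input_columns` selects what the function receives (positionally);
# columns the function doesn't return are still carried into the output
kept = ds.map(lambda a, c: {"sum": a + c}, input_columns=["a", "c"])
print(next(iter(kept)))  # {'a': 1, 'b': 2, 'c': 3, 'sum': 4}

# To actually drop a column, combine it with `remove_columns`
dropped = ds.map(lambda a, c: {"sum": a + c}, input_columns=["a", "c"], remove_columns=["b"])
print(next(iter(dropped)))  # {'a': 1, 'c': 3, 'sum': 4}
```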
https://api.github.com/repos/huggingface/datasets/issues/6032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6032/comments
https://api.github.com/repos/huggingface/datasets/issues/6032/events
https://github.com/huggingface/datasets/issues/6032
1,804,358,679
I_kwDODunzps5rjFQX
6,032
DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-07-14T07:22:55
2023-07-17T04:12:45
null
NONE
null
null
null
### Describe the bug ```python download_config = DownloadConfig(proxies={'https': '<my proxy>'}) builder = load_dataset_builder(..., download_config=download_config) ``` But when getting the dataset_info from HfApi, the HTTP requests do not use the proxies. ### Steps to reproduce the bug 1. Set up proxies in DownloadConfig. 2. Call `load_dataset_builder` with download_config. 3. Inspect the call stack in HfApi.dataset_info. ![image](https://github.com/huggingface/datasets/assets/138426806/33e538a8-2e22-4e63-b634-343febe5324b) ### Expected behavior DownloadConfig.proxies works for getting dataset_info. ### Environment info https://github.com/huggingface/datasets/commit/406b2212263c0d33f267e35b917f410ff6b3bc00 Python 3.11.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6032/timeline
null
null
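Until `DownloadConfig.proxies` reaches the `HfApi.dataset_info` call (#6032 above), a workaround sketch: `huggingface_hub` issues its requests through `requests`, which honors the standard proxy environment variables, so setting them should also cover the metadata calls.

```python
import os

# Hypothetical proxy address; `requests` picks this up for all HTTP(S) calls,
# including the dataset_info request made via `huggingface_hub`
os.environ["HTTPS_PROXY"] = "http://my-proxy:8080"

from datasets import load_dataset_builder

builder = load_dataset_builder("rotten_tomatoes")
```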
https://api.github.com/repos/huggingface/datasets/issues/6031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6031/comments
https://api.github.com/repos/huggingface/datasets/issues/6031/events
https://github.com/huggingface/datasets/issues/6031
1,804,183,858
I_kwDODunzps5riaky
6,031
Argument type for map function changes when using `input_columns` for `IterableDataset`
{ "login": "kwonmha", "id": 8953934, "node_id": "MDQ6VXNlcjg5NTM5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kwonmha", "html_url": "https://github.com/kwonmha", "followers_url": "https://api.github.com/users/kwonmha/followers", "following_url": "https://api.github.com/users/kwonmha/following{/other_user}", "gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}", "starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions", "organizations_url": "https://api.github.com/users/kwonmha/orgs", "repos_url": "https://api.github.com/users/kwonmha/repos", "events_url": "https://api.github.com/users/kwonmha/events{/privacy}", "received_events_url": "https://api.github.com/users/kwonmha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-14T05:11:14
2023-07-14T14:44:15
2023-07-14T14:44:15
NONE
null
null
null
### Describe the bug I wrote a `tokenize(examples)` function as an argument for the `map` function of an `IterableDataset`. It processes a dictionary-type `examples` parameter. It is used as `train_dataset = train_dataset.map(tokenize, batched=True)` and no error is raised. Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` function to select keys and values. It gives me an error saying ``` TypeError: tokenize() takes 1 positional argument but 3 were given. ``` The code below matters. https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687 For example, `inputs = {"a":1, "b":2, "c":3}`. If `self.input_columns` is `None`, `inputs` is a dictionary-type variable and `function_args` becomes a `list` containing a single `dict`: `function_args` becomes `[{"a":1, "b":2, "c":3}]`. Otherwise, let's say `self.input_columns = ["a", "c"]`; then `[inputs[col] for col in self.input_columns]` results in `[1, 3]`. I think it should be `[{"a":1, "c":3}]`. I want to ask if the resulting format is intended. Maybe I can modify `tokenize()` to have 2 parameters in this case instead of 1 dictionary, but this is confusing to me. Or it should be fixed as `[{col: inputs[col] for col in self.input_columns}]`. ### Steps to reproduce the bug Run the `map` function of an `IterableDataset` with the `input_columns` argument. ### Expected behavior `function_args` would be better with a consistent format; I think it should be `[{"a":1, "c":3}]`. ### Environment info dataset version: 2.12 python: 3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6031/timeline
null
completed
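The calling convention behind the `TypeError` in #6031 above, sketched (assumes `datasets` >= 2.11 for `to_iterable_dataset`): with `input_columns`, the mapped function gets one positional argument per selected column, in order, rather than a single dict.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "barbaz"], "label": [0, 1]}).to_iterable_dataset()

# One positional parameter per entry in `input_columns`, in the same order
def tokenize(texts, labels):
    return {"n_chars": [len(t) for t in texts]}

ds = ds.map(tokenize, batched=True, input_columns=["text", "label"])
print(next(iter(ds)))  # {'text': 'foo', 'label': 0, 'n_chars': 3}
```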
https://api.github.com/repos/huggingface/datasets/issues/6030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6030/comments
https://api.github.com/repos/huggingface/datasets/issues/6030/events
https://github.com/huggingface/datasets/pull/6030
1,803,864,744
PR_kwDODunzps5Vd0ZG
6,030
fixed typo in comment
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-13T22:49:57
2023-07-14T14:21:58
2023-07-14T14:13:38
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6030", "html_url": "https://github.com/huggingface/datasets/pull/6030", "diff_url": "https://github.com/huggingface/datasets/pull/6030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6030.patch", "merged_at": "2023-07-14T14:13:38" }
This mistake was a bit confusing, so I thought it was worth sending a PR over.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6030/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6029/comments
https://api.github.com/repos/huggingface/datasets/issues/6029/events
https://github.com/huggingface/datasets/pull/6029
1,803,460,046
PR_kwDODunzps5VcbPW
6,029
[docs] Fix link
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-13T17:24:12
2023-07-13T17:47:41
2023-07-13T17:38:59
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6029", "html_url": "https://github.com/huggingface/datasets/pull/6029", "diff_url": "https://github.com/huggingface/datasets/pull/6029.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6029.patch", "merged_at": "2023-07-13T17:38:59" }
Fixes link to the builder classes :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6029/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6029/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6028/comments
https://api.github.com/repos/huggingface/datasets/issues/6028/events
https://github.com/huggingface/datasets/pull/6028
1,803,294,981
PR_kwDODunzps5Vb3LJ
6,028
Use new hffs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
13
2023-07-13T15:41:44
2023-07-17T17:09:39
2023-07-17T17:01:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6028", "html_url": "https://github.com/huggingface/datasets/pull/6028", "diff_url": "https://github.com/huggingface/datasets/pull/6028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6028.patch", "merged_at": "2023-07-17T17:01:00" }
Thanks to @janineguo's work in https://github.com/huggingface/datasets/pull/5919, which was needed to support HfFileSystem. Switching to `HfFileSystem` will help implement optimizations in data files resolution. ## Implementation details I replaced all the `from_hf_repo` and `from_local_or_remote` calls in data_files.py with a single new `from_patterns`, which works for any fsspec path, including hf:// paths, https:// URLs and local paths. This simplifies the codebase since there is no logic duplication anymore when it comes to data files resolution. I added `_prepare_path_and_storage_options`, which returns the right storage_options to use given a path and a `DownloadConfig`. This is the only place where the logic depends on the filesystem type that must be used. I also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled through a common interface. ## New features hf:// paths are now supported in data_files ## Breaking changes DataFilesList and DataFilesDict: - use `str` paths instead of `Union[Path, Url]` - Windows paths must be provided as POSIX paths close https://github.com/huggingface/datasets/issues/6017
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6028/timeline
null
null
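A minimal illustration of the hf:// support added by #6028 above (the repo and file names are hypothetical):

```python
from datasets import load_dataset

# `data_files` can now point straight at files inside a Hub dataset repo
ds = load_dataset("csv", data_files="hf://datasets/username/my-dataset/data/train.csv")
```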
https://api.github.com/repos/huggingface/datasets/issues/6027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6027/comments
https://api.github.com/repos/huggingface/datasets/issues/6027/events
https://github.com/huggingface/datasets/pull/6027
1,803,008,486
PR_kwDODunzps5Va4g3
6,027
Delete `task_templates` in `IterableDataset` when they are no longer valid
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-13T13:16:17
2023-07-13T14:06:20
2023-07-13T13:57:35
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6027", "html_url": "https://github.com/huggingface/datasets/pull/6027", "diff_url": "https://github.com/huggingface/datasets/pull/6027.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6027.patch", "merged_at": "2023-07-13T13:57:35" }
Fix #6025
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6027/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6026/comments
https://api.github.com/repos/huggingface/datasets/issues/6026/events
https://github.com/huggingface/datasets/pull/6026
1,802,929,222
PR_kwDODunzps5VanI8
6,026
Fix style with ruff 0.0.278
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-13T12:34:24
2023-07-13T12:46:26
2023-07-13T12:37:01
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6026", "html_url": "https://github.com/huggingface/datasets/pull/6026", "diff_url": "https://github.com/huggingface/datasets/pull/6026.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6026.patch", "merged_at": "2023-07-13T12:37:01" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6026/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6025/comments
https://api.github.com/repos/huggingface/datasets/issues/6025/events
https://github.com/huggingface/datasets/issues/6025
1,801,852,601
I_kwDODunzps5rZha5
6,025
Using a dataset for a use other than it was intended for.
{ "login": "surya-narayanan", "id": 17240858, "node_id": "MDQ6VXNlcjE3MjQwODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surya-narayanan", "html_url": "https://github.com/surya-narayanan", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-12T22:33:17
2023-07-13T13:57:36
2023-07-13T13:57:36
NONE
null
null
null
### Describe the bug Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave it with other datasets, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label column must be present in the dataset for some reason? Here is the full stack trace ``` File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted") File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets return _interleave_iterable_datasets( File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets info = DatasetInfo.from_merge([d.info for d in datasets]) File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp> dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File "<string>", line 20, in __init__ File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__ self.task_templates = [ File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp> template.align_with_features(self.features) for template in (self.task_templates) File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features raise ValueError(f"Column {self.label_column} is not present in features.") ValueError: Column label is not present in features. ``` ### Steps to reproduce the bug Delete the column `label` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets. ### Expected behavior It should let me use the dataset with just the `text` field. ### Environment info Latest `datasets` library? I don't think this was an issue in earlier versions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6025/timeline
null
completed
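PR #6027 above fixes #6025 by deleting task templates once they are no longer valid. On affected versions, a possible stopgap is to clear the stale templates by hand before interleaving - a sketch only, since it mutates `DatasetInfo` in place:

```python
from datasets import interleave_datasets, load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True).remove_columns("label")
other = load_dataset("ag_news", split="train", streaming=True).remove_columns("label")

# Drop the stale text-classification templates so DatasetInfo merging
# doesn't re-validate them against the now-missing "label" columns
for d in (ds, other):
    d.info.task_templates = None

mixed = interleave_datasets([ds, other], stopping_strategy="all_exhausted")
```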
https://api.github.com/repos/huggingface/datasets/issues/6024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6024/comments
https://api.github.com/repos/huggingface/datasets/issues/6024/events
https://github.com/huggingface/datasets/pull/6024
1,801,708,808
PR_kwDODunzps5VWbGe
6,024
Don't reference self in Spark._validate_cache_dir
{ "login": "maddiedawson", "id": 106995444, "node_id": "U_kgDOBmCe9A", "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maddiedawson", "html_url": "https://github.com/maddiedawson", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "repos_url": "https://api.github.com/users/maddiedawson/repos", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-07-12T20:31:16
2023-07-13T16:58:32
2023-07-13T12:37:09
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6024", "html_url": "https://github.com/huggingface/datasets/pull/6024", "diff_url": "https://github.com/huggingface/datasets/pull/6024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6024.patch", "merged_at": "2023-07-13T12:37:09" }
Fix for https://github.com/huggingface/datasets/issues/5963
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6024/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6023/comments
https://api.github.com/repos/huggingface/datasets/issues/6023/events
https://github.com/huggingface/datasets/pull/6023
1,801,272,420
PR_kwDODunzps5VU7EG
6,023
Fix `ClassLabel` min max check for `None` values
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-12T15:46:12
2023-07-12T16:29:26
2023-07-12T16:18:04
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6023", "html_url": "https://github.com/huggingface/datasets/pull/6023", "diff_url": "https://github.com/huggingface/datasets/pull/6023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6023.patch", "merged_at": "2023-07-12T16:18:04" }
Fix #6022
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6023/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6022/comments
https://api.github.com/repos/huggingface/datasets/issues/6022/events
https://github.com/huggingface/datasets/issues/6022
1,800,092,589
I_kwDODunzps5rSzut
6,022
Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int'
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-12T03:20:17
2023-07-12T16:18:06
2023-07-12T16:18:05
NONE
null
null
null
### Describe the bug When mapping some datasets with `batched=True`, datasets may raise an exception: ```python Traceback (most recent call last): File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) ^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3483, in _map_single writer.write_batch(batch) File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_writer.py", line 549, in write_batch array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 2063, in cast_array_to_feature return feature.cast_storage(array) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/features/features.py", line 1098, in cast_storage if min_max["max"] >= self.num_classes: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: '>=' not supported between instances of 'NoneType' and 'int' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/codingl2k1/Work/datasets/t1.py", line 33, in <module> ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 850, in map { File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 851, in <dictcomp> k: dataset.map( ^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 577, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 542, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3179, in map for rank, done, content in iflatmap_unordered( File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get raise self._value TypeError: '>=' not supported between instances of 'NoneType' and 'int' ``` ### Steps to reproduce the bug 1. Check out the latest main of datasets. 2. Run the code: ```python from datasets import load_dataset def transforms(examples): # examples["pixel_values"] = [image.convert("RGB").resize((100, 100)) for image in examples["image"]] return examples ds = load_dataset("scene_parse_150") ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5) print(ds) ``` ### Expected behavior `map` should complete without an exception. ### Environment info Datasets: https://github.com/huggingface/datasets/commit/b8067c0262073891180869f700ebef5ac3dc5cce Python: 3.11.4 System: macOS
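For context, a minimal sketch of the failing comparison: PyArrow's `min_max` over an all-null label chunk returns nulls, so the `ClassLabel` range check has to guard for `None` before comparing (the class count below is illustrative):

```python
import pyarrow as pa
import pyarrow.compute as pc

# An all-null chunk (e.g. an unlabeled split) yields null min/max values.
arr = pa.array([None, None], type=pa.int64())
min_max = pc.min_max(arr).as_py()  # {'min': None, 'max': None}

num_classes = 151  # illustrative
if min_max["max"] is not None and min_max["max"] >= num_classes:
    raise ValueError("label id out of range")
```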
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6022/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6021/comments
https://api.github.com/repos/huggingface/datasets/issues/6021/events
https://github.com/huggingface/datasets/pull/6021
1,799,785,904
PR_kwDODunzps5VP11Q
6,021
[docs] Update return statement of index search
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-11T21:33:32
2023-07-12T17:13:02
2023-07-12T17:03:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6021", "html_url": "https://github.com/huggingface/datasets/pull/6021", "diff_url": "https://github.com/huggingface/datasets/pull/6021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6021.patch", "merged_at": "2023-07-12T17:03:00" }
Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting because multiple return values are not supported.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6021/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6020/comments
https://api.github.com/repos/huggingface/datasets/issues/6020/events
https://github.com/huggingface/datasets/issues/6020
1,799,720,536
I_kwDODunzps5rRY5Y
6,020
Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs
{ "login": "kheyer", "id": 38166299, "node_id": "MDQ6VXNlcjM4MTY2Mjk5", "avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kheyer", "html_url": "https://github.com/kheyer", "followers_url": "https://api.github.com/users/kheyer/followers", "following_url": "https://api.github.com/users/kheyer/following{/other_user}", "gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kheyer/subscriptions", "organizations_url": "https://api.github.com/users/kheyer/orgs", "repos_url": "https://api.github.com/users/kheyer/repos", "events_url": "https://api.github.com/users/kheyer/events{/privacy}", "received_events_url": "https://api.github.com/users/kheyer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-07-11T20:40:38
2023-07-12T15:58:24
null
NONE
null
null
null
### Describe the bug I'm using a dataset with map and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes/shards used. I've reproduced a minimal example below. My current workaround is to fill empty results with a dummy value that I filter out afterwards, but this was a weird error that took a while to track down. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)]) def test_func(row, idx): if idx==58: return {'output': []} else: return {'output' : [{'test':1}, {'test':2}]} # this works fine test1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4) # this fails test2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32) >ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value("null"). ``` The error occurs during the check ```python _check_if_features_can_be_aligned([dset.features for dset in dsets]) ``` When the multiprocessing splitting lines up just right with the empty return value, one of the `dset` entries in `dsets` will have a single item with an empty list value, causing the error. ### Expected behavior Expected behavior is that the result would be the same regardless of the `num_proc` value used. ### Environment info Datasets version 2.11.0 Python 3.9.16
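The workaround mentioned above, sketched out against the repro snippet; the placeholder value and function names are illustrative:

```python
def test_func_safe(row, idx):
    # Emit a typed placeholder instead of an empty list so every shard
    # infers the same feature type, then strip it afterwards.
    if idx == 58:
        return {'output': [{'test': -1}]}
    return {'output': [{'test': 1}, {'test': 2}]}

test2 = dataset.map(test_func_safe, with_indices=True, num_proc=32)
test2 = test2.map(lambda row: {'output': [o for o in row['output'] if o['test'] != -1]})
```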
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6020/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6019/comments
https://api.github.com/repos/huggingface/datasets/issues/6019/events
https://github.com/huggingface/datasets/pull/6019
1,799,532,822
PR_kwDODunzps5VPAlD
6,019
Improve logging
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
13
2023-07-11T18:30:23
2023-07-12T19:34:14
2023-07-12T17:19:28
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6019", "html_url": "https://github.com/huggingface/datasets/pull/6019", "diff_url": "https://github.com/huggingface/datasets/pull/6019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6019.patch", "merged_at": "2023-07-12T17:19:28" }
Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about "loading a cached result" (and some other warnings) as INFO (Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as an indicator that a result is not cached, so it makes more sense not to delete them) Fix #2832, fix https://github.com/huggingface/datasets/issues/1948, fix https://github.com/huggingface/datasets/issues/5444
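For readers wondering how to surface the new INFO-level messages from user code, a one-liner sketch using the library's standard verbosity helper:

```python
import datasets

# Show the "loading cached result" and similar INFO messages.
datasets.logging.set_verbosity_info()
```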
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6019/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6018/comments
https://api.github.com/repos/huggingface/datasets/issues/6018/events
https://github.com/huggingface/datasets/pull/6018
1,799,411,999
PR_kwDODunzps5VOmKY
6,018
test1
{ "login": "ognjenovicj", "id": 139256323, "node_id": "U_kgDOCEziAw", "avatar_url": "https://avatars.githubusercontent.com/u/139256323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ognjenovicj", "html_url": "https://github.com/ognjenovicj", "followers_url": "https://api.github.com/users/ognjenovicj/followers", "following_url": "https://api.github.com/users/ognjenovicj/following{/other_user}", "gists_url": "https://api.github.com/users/ognjenovicj/gists{/gist_id}", "starred_url": "https://api.github.com/users/ognjenovicj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ognjenovicj/subscriptions", "organizations_url": "https://api.github.com/users/ognjenovicj/orgs", "repos_url": "https://api.github.com/users/ognjenovicj/repos", "events_url": "https://api.github.com/users/ognjenovicj/events{/privacy}", "received_events_url": "https://api.github.com/users/ognjenovicj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-11T17:25:49
2023-07-20T10:11:41
2023-07-20T10:11:41
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6018", "html_url": "https://github.com/huggingface/datasets/pull/6018", "diff_url": "https://github.com/huggingface/datasets/pull/6018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6018.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6018/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6017/comments
https://api.github.com/repos/huggingface/datasets/issues/6017/events
https://github.com/huggingface/datasets/issues/6017
1,799,309,132
I_kwDODunzps5rP0dM
6,017
Switch to huggingface_hub's HfFileSystem
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
0
2023-07-11T16:24:40
2023-07-17T17:01:01
2023-07-17T17:01:01
MEMBER
null
null
null
Instead of the current `datasets.filesystems.hffilesystem.HfFileSystem`, which can be slow in some cases. Related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919
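A quick sketch of the target API from `huggingface_hub` (the repo id is illustrative):

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
# List a dataset repo's files through the fsspec interface.
print(fs.ls("datasets/squad", detail=False))
```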
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6017/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6016/comments
https://api.github.com/repos/huggingface/datasets/issues/6016/events
https://github.com/huggingface/datasets/pull/6016
1,798,968,033
PR_kwDODunzps5VNEvn
6,016
Dataset string representation enhancement
{ "login": "Ganryuu", "id": 63643948, "node_id": "MDQ6VXNlcjYzNjQzOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ganryuu", "html_url": "https://github.com/Ganryuu", "followers_url": "https://api.github.com/users/Ganryuu/followers", "following_url": "https://api.github.com/users/Ganryuu/following{/other_user}", "gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions", "organizations_url": "https://api.github.com/users/Ganryuu/orgs", "repos_url": "https://api.github.com/users/Ganryuu/repos", "events_url": "https://api.github.com/users/Ganryuu/events{/privacy}", "received_events_url": "https://api.github.com/users/Ganryuu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-07-11T13:38:25
2023-07-16T10:26:18
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6016", "html_url": "https://github.com/huggingface/datasets/pull/6016", "diff_url": "https://github.com/huggingface/datasets/pull/6016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6016.patch", "merged_at": null }
My attempt at #6010. Not sure if this is the right way to go about it; I will wait for your feedback.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6016/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6015/comments
https://api.github.com/repos/huggingface/datasets/issues/6015/events
https://github.com/huggingface/datasets/pull/6015
1,798,807,893
PR_kwDODunzps5VMhgB
6,015
Add metadata ui screenshot in docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-11T12:16:29
2023-07-11T16:07:28
2023-07-11T15:56:46
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6015", "html_url": "https://github.com/huggingface/datasets/pull/6015", "diff_url": "https://github.com/huggingface/datasets/pull/6015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6015.patch", "merged_at": "2023-07-11T15:56:46" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6015/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6014/comments
https://api.github.com/repos/huggingface/datasets/issues/6014/events
https://github.com/huggingface/datasets/issues/6014
1,798,213,816
I_kwDODunzps5rLpC4
6,014
Request to Share/Update Dataset Viewer Code
{ "login": "lilyorlilypad", "id": 105081034, "node_id": "U_kgDOBkNoyg", "avatar_url": "https://avatars.githubusercontent.com/u/105081034?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lilyorlilypad", "html_url": "https://github.com/lilyorlilypad", "followers_url": "https://api.github.com/users/lilyorlilypad/followers", "following_url": "https://api.github.com/users/lilyorlilypad/following{/other_user}", "gists_url": "https://api.github.com/users/lilyorlilypad/gists{/gist_id}", "starred_url": "https://api.github.com/users/lilyorlilypad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lilyorlilypad/subscriptions", "organizations_url": "https://api.github.com/users/lilyorlilypad/orgs", "repos_url": "https://api.github.com/users/lilyorlilypad/repos", "events_url": "https://api.github.com/users/lilyorlilypad/events{/privacy}", "received_events_url": "https://api.github.com/users/lilyorlilypad/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
2023-07-11T06:36:09
2023-07-12T14:18:49
null
NONE
null
null
null
Overview: The repository (huggingface/datasets-viewer) was recently archived, and when I tried to run the code, I got the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to the lack of documentation for that attribute. Request: I kindly request the sharing of the code responsible for the dataset preview functionality, or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code. Thank you for considering this request, and I look forward to your response.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6014/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6013/comments
https://api.github.com/repos/huggingface/datasets/issues/6013/events
https://github.com/huggingface/datasets/issues/6013
1,796,083,437
I_kwDODunzps5rDg7t
6,013
[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
1
2023-07-10T06:42:20
2023-07-10T15:37:52
null
CONTRIBUTOR
null
null
null
### Feature request Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns. ### Motivation This allows having datasets with different columns but sharing some basic columns. Currently, these datasets would become too expensive to store and one would need some kind of on-the-fly join; which also doesn't seem implemented. ### Your contribution _
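A small sketch of the duplication being described: adding one derived column re-serializes every existing column into the new cache file (the column contents here are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["a", "b"],
    "embedding": [[0.0] * 1024, [1.0] * 1024],  # the "heavy" shared column
})
# This rewrites "text" and "embedding" to disk again just to add "length".
ds2 = ds.map(lambda row: {"length": len(row["text"])})
```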
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6013/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6012/comments
https://api.github.com/repos/huggingface/datasets/issues/6012/events
https://github.com/huggingface/datasets/issues/6012
1,795,575,432
I_kwDODunzps5rBk6I
6,012
[FR] Transform Chaining, Lazy Mapping
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
6
2023-07-09T21:40:21
2023-07-14T13:12:40
null
CONTRIBUTOR
null
null
null
### Feature request Currently, a `map` call processes and duplicates the whole dataset, which takes both time and disk space. The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested. The API should look like `map`, as `set_transform` changes the current dataset while `map` returns another dataset. ### Motivation Lazy processing allows lower disk usage and faster experimentation. ### Your contribution _
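For comparison, a sketch of what exists today: `with_transform` already applies a function on the fly at access time without writing to disk, but it holds a single transform rather than a chain; the request is essentially a chainable, `map`-like version of this:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
lazy = ds.with_transform(lambda batch: {"x2": [v * 2 for v in batch["x"]]})
print(lazy[0])  # the transform runs here, on access; nothing is cached to disk
```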
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6012/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6011/comments
https://api.github.com/repos/huggingface/datasets/issues/6011/events
https://github.com/huggingface/datasets/issues/6011
1,795,296,568
I_kwDODunzps5rAg04
6,011
Documentation: wiki_dpr Dataset has no metric_type for Faiss Index
{ "login": "YichiRockyZhang", "id": 29335344, "node_id": "MDQ6VXNlcjI5MzM1MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YichiRockyZhang", "html_url": "https://github.com/YichiRockyZhang", "followers_url": "https://api.github.com/users/YichiRockyZhang/followers", "following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}", "gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions", "organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs", "repos_url": "https://api.github.com/users/YichiRockyZhang/repos", "events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-09T08:30:19
2023-07-11T03:02:36
2023-07-11T03:02:36
NONE
null
null
null
### Describe the bug After loading `wiki_dpr` using: ```py ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train') print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None ``` the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`. ### Steps to reproduce the bug System: Python 3.9.16, Transformers 4.30.2, WSL After loading `wiki_dpr` using: ```py ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train') print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None ``` the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`. ```py import torch from transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base") encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base") def encode_question(query, tokenizer=tokenizer, encoder=encoder): inputs = tokenizer(query, return_tensors='pt') question_embedding = encoder(**inputs)[0].detach().numpy() return question_embedding def get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False): enc_question = encode_question(query, tokenizer, encoder) topk_results = ds.get_nearest_examples(index_name='embeddings', query=enc_question, k=k) a = torch.tensor(enc_question[0]).reshape(768) b = torch.tensor(topk_results.examples['embeddings'][0]) print(a.shape, b.shape) print(torch.dot(a, b)) print((a-b).pow(2).sum()) return topk_results ``` The [FAISS documentation](https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. I compute both for the sample query: ```py query = """ it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales. Despite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful.""" get_knn(query,k=5) ``` Here, I get a dot product of 80.6020 and an L2 distance of 77.6616, and: ```py NearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ], dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the "Power Rangers" franchise which has continued since then into sequel TV series (with "Power Rangers Beast Morphers" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of "Power Rangers", Saban acquired the rights to more of Toei\'s library, creating "VR Troopers" and "Big Bad Beetleborgs" from several Metal Hero Series shows and "Masked Rider" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to "Gridman the Hyper Agent" and turning it into "Superhuman Samurai Syber-Squad". In 2002,', ``` Using `k=1` indicates that the higher the output score, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) has a discrepancy with the reported (76.2). Perhaps this has to do with my using the `compressed` embeddings? ### Expected behavior ```py ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train') print(ds.get_index("embeddings").metric_type) # METRIC_INNER_PRODUCT ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
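When building an index yourself, the metric can be recorded explicitly, which sidesteps the ambiguity above; a sketch assuming the `faiss` package and the same `embeddings` column:

```python
import faiss

ds.add_faiss_index(
    column="embeddings",
    # Recorded on the index, so ds.get_index("embeddings").metric_type is set.
    metric_type=faiss.METRIC_INNER_PRODUCT,
)
```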
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6011/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6010/comments
https://api.github.com/repos/huggingface/datasets/issues/6010/events
https://github.com/huggingface/datasets/issues/6010
1,793,838,152
I_kwDODunzps5q68xI
6,010
Improve `Dataset`'s string representation
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2023-07-07T16:38:03
2023-07-16T13:00:18
null
CONTRIBUTOR
null
null
null
Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows. We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit.
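A rough, pandas-style sketch of what the improved representation could look like (the method body is illustrative, not a final design):

```python
def __repr__(self):
    # Show the first few rows plus the schema instead of only column names.
    preview = self.select(range(min(5, self.num_rows))).to_pandas()
    return f"{preview}\n\nFeatures: {self.features}\nNum rows: {self.num_rows}"
```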
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6010/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6009/comments
https://api.github.com/repos/huggingface/datasets/issues/6009/events
https://github.com/huggingface/datasets/pull/6009
1,792,059,808
PR_kwDODunzps5U1mus
6,009
Fix cast for dictionaries with no keys
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-06T18:48:14
2023-07-07T14:13:00
2023-07-07T14:01:13
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6009", "html_url": "https://github.com/huggingface/datasets/pull/6009", "diff_url": "https://github.com/huggingface/datasets/pull/6009.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6009.patch", "merged_at": "2023-07-07T14:01:13" }
Fix #5677
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6009/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6008/comments
https://api.github.com/repos/huggingface/datasets/issues/6008/events
https://github.com/huggingface/datasets/issues/6008
1,789,869,344
I_kwDODunzps5qrz0g
6,008
Dataset.from_generator consistently freezes at ~1000 rows
{ "login": "andreemic", "id": 27695722, "node_id": "MDQ6VXNlcjI3Njk1NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreemic", "html_url": "https://github.com/andreemic", "followers_url": "https://api.github.com/users/andreemic/followers", "following_url": "https://api.github.com/users/andreemic/following{/other_user}", "gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreemic/subscriptions", "organizations_url": "https://api.github.com/users/andreemic/orgs", "repos_url": "https://api.github.com/users/andreemic/repos", "events_url": "https://api.github.com/users/andreemic/events{/privacy}", "received_events_url": "https://api.github.com/users/andreemic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-07-05T16:06:48
2023-07-10T13:46:39
2023-07-10T13:46:39
NONE
null
null
null
### Describe the bug Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with because generators are the easiest way to turn an existing dataset into a Hugging Face dataset. I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset. Let me know if you have ideas on how to resolve it! ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np def gen(): for row in range(10000): yield {"i": np.random.rand(512, 512, 3)} Dataset.from_generator(gen) # -> 90% of the time gets stuck around 1000 rows ``` ### Expected behavior It should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 12.0.1 - Pandas version: 1.5.1
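One hedged thing to try when debugging this: declare the feature type up front so type inference does not run over every large float array batch (an assumption about where the time goes, not a confirmed diagnosis):

```python
import numpy as np
from datasets import Array3D, Dataset, Features

# Fixed-shape array feature matching the generator's output.
features = Features({"i": Array3D(shape=(512, 512, 3), dtype="float64")})

def gen():
    for _ in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}

ds = Dataset.from_generator(gen, features=features)
```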
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6008/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6007/comments
https://api.github.com/repos/huggingface/datasets/issues/6007/events
https://github.com/huggingface/datasets/issues/6007
1,789,782,693
I_kwDODunzps5qreql
6,007
Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[ { "id": 5705560427, "node_id": "LA_kwDODunzps8AAAABVBPxaw", "url": "https://api.github.com/repos/huggingface/datasets/labels/arrow", "name": "arrow", "color": "c2e0c6", "default": false, "description": "Related to Apache Arrow" } ]
open
false
null
[]
null
7
2023-07-05T15:16:50
2023-07-10T19:11:17
null
CONTRIBUTOR
null
null
null
### Describe the bug When loading a large dataset with the following code: ```python from datasets import load_dataset dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train') ``` we encountered the error "OverflowError: Python int too large to convert to C long". The error looks something like: ``` OverflowError: Python int too large to convert to C long During handling of the above exception, another exception occurred: OverflowError Traceback (most recent call last) <ipython-input-7-0ed8700e662d> in <module> ----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/') /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1749 ignore_verifications=ignore_verifications, 1750 try_from_hf_gcs=try_from_hf_gcs, -> 1751 use_auth_token=use_auth_token, 1752 ) 1753 /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 703 if not downloaded_from_gcs: 704 self._download_and_prepare( --> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1225 1226 def _download_and_prepare(self, dl_manager, verify_infos): -> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1228 1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 791 try: 792 # Prepare split will record examples associated to the split --> 793 self._prepare_split(split_generator, **prepare_split_kwargs) 794 except OSError as e: 795 raise OSError( /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys) 1219 writer.write(example, key) 1220 finally: -> 1221 num_examples, num_bytes = writer.finalize() 1222 1223 split_generator.split_info.num_examples = num_examples /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream) 536 # Re-intializing to empty list for next batch 537 self.hkey_record = [] --> 538 self.write_examples_on_file() 539 if self.pa_writer is None: 540 if self.schema: /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 407 # Since current_examples contains (example, key) tuples 408 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 409 self.write_batch(batch_examples=batch_examples) 410 self.current_examples = [] 411 /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 506 col_try_type = try_features[col] if try_features is not None and col in try_features else None 507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 508 arrays.append(pa.array(typed_sequence)) 509 inferred_features[col] = typed_sequence.get_inferred_type() 510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array() /sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() /sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 180 else: 181 trying_cast_to_python_objects = True --> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) 183 # use smaller integer precisions if possible 184 if self.trying_int_optimization: /sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array() /sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() OverflowError: Python int too large to convert to C long ``` However, that dataset can be loaded in a streaming manner: ```python from datasets import load_dataset dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True) for i in dataset: pass # it works well ``` Another issue is reported on our dataset hub: https://huggingface.co/datasets/liwu/MNBVC/discussions/2 ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train') ``` ### Expected behavior The dataset can be safely loaded. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9 - Python version: 3.6.8 - PyArrow version: 6.0.1 - Pandas version: 1.1.5
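A minimal Arrow-side sketch of the same error class, assuming the dataset contains an integer beyond the int64 range (the exact exception text can vary by pyarrow version):

```python
import pyarrow as pa

# Converting a Python int past the 64-bit range fails during writing;
# on the reporter's pyarrow 6.0.1 this surfaces as the OverflowError above.
pa.array([2**70], type=pa.int64())
```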
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6007/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6006/comments
https://api.github.com/repos/huggingface/datasets/issues/6006/events
https://github.com/huggingface/datasets/issues/6006
1,788,855,582
I_kwDODunzps5qn8Ue
6,006
NotADirectoryError when loading gigaword
{ "login": "xipq", "id": 115634163, "node_id": "U_kgDOBuRv8w", "avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xipq", "html_url": "https://github.com/xipq", "followers_url": "https://api.github.com/users/xipq/followers", "following_url": "https://api.github.com/users/xipq/following{/other_user}", "gists_url": "https://api.github.com/users/xipq/gists{/gist_id}", "starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xipq/subscriptions", "organizations_url": "https://api.github.com/users/xipq/orgs", "repos_url": "https://api.github.com/users/xipq/repos", "events_url": "https://api.github.com/users/xipq/events{/privacy}", "received_events_url": "https://api.github.com/users/xipq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-05T06:23:41
2023-07-05T06:31:02
2023-07-05T06:31:01
NONE
null
null
null
### Describe the bug Got a `NotADirectoryError` when loading the gigaword dataset. ### Steps to reproduce the bug When running ```python import datasets datasets.load_dataset('gigaword') ``` I got the following exception: ```bash Traceback (most recent call last): File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single for key, record in generator: File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s: File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper return function(*args, use_auth_token=use_auth_token, **kwargs) File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen return open(main_hop, mode, *args, **kwargs) NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "gigaword.py", line 38, in <module> main() File "gigaword.py", line 35, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data dataset = self.load_dataset() File "gigaword.py", line 29, in load_dataset return datasets.load_dataset('gigaword') File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset builder_instance.download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare super()._download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior Download and process the dataset successfully. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10 - Python version: 3.8.0 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
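Not part of the original report: when a cached download ends up corrupted or only partially extracted (one plausible cause of a `NotADirectoryError` like the one above), forcing a fresh download is a common first debugging step. A sketch:

```python
import datasets

# re-download and re-extract, bypassing the possibly corrupted cache entry
ds = datasets.load_dataset("gigaword", download_mode="force_redownload")
```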
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6006/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/6005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6005/comments
https://api.github.com/repos/huggingface/datasets/issues/6005/events
https://github.com/huggingface/datasets/pull/6005
1,788,103,576
PR_kwDODunzps5UoJ91
6,005
Drop Python 3.7 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
2023-07-04T15:02:37
2023-07-06T15:32:41
2023-07-06T15:22:43
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6005", "html_url": "https://github.com/huggingface/datasets/pull/6005", "diff_url": "https://github.com/huggingface/datasets/pull/6005.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6005.patch", "merged_at": "2023-07-06T15:22:43" }
`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :). (Based on the stats, it seems fewer than 10% of users use `datasets` with Python 3.7.)
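For illustration, the core of such a change is typically the `python_requires` pin in `setup.py` (a sketch, not the actual diff):

```python
from setuptools import setup

setup(
    name="datasets",
    python_requires=">=3.8.0",  # Python 3.7 is no longer supported
)
```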
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6005/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6004/comments
https://api.github.com/repos/huggingface/datasets/issues/6004/events
https://github.com/huggingface/datasets/pull/6004
1,786,636,368
PR_kwDODunzps5UjN2h
6,004
Misc improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-07-03T18:29:14
2023-07-06T17:04:11
2023-07-06T16:55:25
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6004", "html_url": "https://github.com/huggingface/datasets/pull/6004", "diff_url": "https://github.com/huggingface/datasets/pull/6004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6004.patch", "merged_at": "2023-07-06T16:55:25" }
Contains the following improvements: * fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section * updates `Makefile` to also run the style checks on `utils` and `setup.py` * deletes a test for GH-hosted datasets (no longer supported) * deletes `convert_dataset.sh` (outdated) * aligns `utils/release.py` with `transformers` (the current version is outdated)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6004/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6003/comments
https://api.github.com/repos/huggingface/datasets/issues/6003/events
https://github.com/huggingface/datasets/issues/6003
1,786,554,110
I_kwDODunzps5qfKb-
6,003
interleave_datasets & DataCollatorForLanguageModeling having a conflict ?
{ "login": "PonteIneptique", "id": 1929830, "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PonteIneptique", "html_url": "https://github.com/PonteIneptique", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-07-03T17:15:31
2023-07-03T17:15:31
null
NONE
null
null
null
### Describe the bug Hi everyone :) I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_datasets`: - `tokenize()` runs fine - `group_texts()` runs fine Every time, on step 19, I get ```pytb File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens inputs[indices_random] = random_words[indices_random] RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source. ``` I tried: - training without interleave on dataset 1, it runs - training without interleave on dataset 2, it runs - training without `.to_iterable_dataset()`, it hangs then crashes - training without group_texts() and padding to max_length seemed to fix the issue, though this may simply have pushed the problem to a much later step. I might have coded something wrong, but I can't see what. ### Steps to reproduce the bug I have this function: ```py def build_dataset(path: str, percent: str): dataset = load_dataset( "text", data_files={"train": [path]}, split=f"train[{percent}]" ) dataset = dataset.map( lambda examples: tokenize(examples["text"]), batched=True, num_proc=num_proc, ) dataset = dataset.map( group_texts, batched=True, num_proc=num_proc, desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}", remove_columns=["text"] ) print(len(dataset)) return dataset.to_iterable_dataset() ``` I hardcoded group_texts: ```py def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict. # We could add padding if the model supported it instead of this drop, you can customize this part to your needs. total_length = (total_length // 512) * 512 # Split by chunks of max_len. result = { k: [t[i: i + 512] for i in range(0, total_length, 512)] for k, t in concatenated_examples.items() } # result = {k: [el for el in elements if el] for k, elements in result.items()} return result ``` And then I build datasets using the following code: ```py train1 = build_dataset("d1.txt", ":95%") train2 = build_dataset("d2.txt", ":95%") dev1 = build_dataset("d1.txt", "95%:") dev2 = build_dataset("d2.txt", "95%:") ``` and finally I run ```py train_dataset = interleave_datasets( [train1, train2], probabilities=[0.8, 0.2], seed=42 ) eval_dataset = interleave_datasets( [dev1, dev2], probabilities=[0.8, 0.2], seed=42 ) ``` Then I run the training part, which remains mostly untouched: > CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16 ### Expected behavior The model should then train normally, but fails every time at the same step (19). Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly]. ### Environment info transformers[torch] 4.30.2 Ubuntu A100 0 CUDA 12 Driver Version: 525.116.04
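Not from the original report: the line the author commented out in `group_texts` hints at one mitigation, namely dropping chunks that end up empty before they ever reach the data collator. A minimal sketch, assuming the tokenized column is named `input_ids`:

```python
# drop examples whose token chunks ended up empty, so torch_mask_tokens
# never receives a zero-length batch (column name is an assumption)
train_dataset = train_dataset.filter(lambda example: len(example["input_ids"]) > 0)
```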
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6003/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6002/comments
https://api.github.com/repos/huggingface/datasets/issues/6002/events
https://github.com/huggingface/datasets/pull/6002
1,786,053,060
PR_kwDODunzps5UhP-Z
6,002
Add KLUE-MRC metrics
{ "login": "ingyuseong", "id": 37537248, "node_id": "MDQ6VXNlcjM3NTM3MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/37537248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ingyuseong", "html_url": "https://github.com/ingyuseong", "followers_url": "https://api.github.com/users/ingyuseong/followers", "following_url": "https://api.github.com/users/ingyuseong/following{/other_user}", "gists_url": "https://api.github.com/users/ingyuseong/gists{/gist_id}", "starred_url": "https://api.github.com/users/ingyuseong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ingyuseong/subscriptions", "organizations_url": "https://api.github.com/users/ingyuseong/orgs", "repos_url": "https://api.github.com/users/ingyuseong/repos", "events_url": "https://api.github.com/users/ingyuseong/events{/privacy}", "received_events_url": "https://api.github.com/users/ingyuseong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-03T12:11:10
2023-07-09T11:57:20
2023-07-09T11:57:20
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6002", "html_url": "https://github.com/huggingface/datasets/pull/6002", "diff_url": "https://github.com/huggingface/datasets/pull/6002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6002.patch", "merged_at": null }
## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension) Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue). KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format, which is why it needs dedicated metrics. Specifically, [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) leverages the SQuAD scoring script to evaluate SQuAD 2.0 and KorQuAD, but that script isn't suitable for KLUE-MRC's format, so I added a scoring script for KLUE-MRC. - [x] All tests passed - [x] Added a metric card (based on the metric card of SQuAD 2.0) - [x] Compatibility test with [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) passed ### References - [KLUE: Korean Language Understanding Evaluation](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf) - [KLUE on Hugging Face Datasets](https://huggingface.co/datasets/klue) - #2416
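A hypothetical usage sketch, assuming the metric keeps a SQuAD 2.0-style input format; the metric name `"klue_mrc"` and the field names below are assumptions, not confirmed by the PR:

```python
from datasets import load_metric

# "klue_mrc" is a placeholder for wherever the metric script lands
metric = load_metric("klue_mrc")
results = metric.compute(
    predictions=[{"id": "1", "prediction_text": "서울", "no_answer_probability": 0.0}],
    references=[{"id": "1", "answers": {"text": ["서울"], "answer_start": [0]}}],
)
print(results)
```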
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6002/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6001/comments
https://api.github.com/repos/huggingface/datasets/issues/6001/events
https://github.com/huggingface/datasets/pull/6001
1,782,516,627
PR_kwDODunzps5UVMMh
6,001
Align `column_names` type check with type hint in `sort`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-30T13:15:50
2023-06-30T14:18:32
2023-06-30T14:11:24
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6001", "html_url": "https://github.com/huggingface/datasets/pull/6001", "diff_url": "https://github.com/huggingface/datasets/pull/6001.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6001.patch", "merged_at": "2023-06-30T14:11:24" }
Fix #5998
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6001/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/6000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6000/comments
https://api.github.com/repos/huggingface/datasets/issues/6000/events
https://github.com/huggingface/datasets/pull/6000
1,782,456,878
PR_kwDODunzps5UU_FB
6,000
Pin `joblib` to avoid `joblibspark` test failures
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-30T12:36:54
2023-06-30T13:17:05
2023-06-30T13:08:27
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6000", "html_url": "https://github.com/huggingface/datasets/pull/6000", "diff_url": "https://github.com/huggingface/datasets/pull/6000.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6000.patch", "merged_at": "2023-06-30T13:08:27" }
`joblibspark` doesn't support the latest `joblib` release. See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6000/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5999/comments
https://api.github.com/repos/huggingface/datasets/issues/5999/events
https://github.com/huggingface/datasets/issues/5999
1,781,851,513
I_kwDODunzps5qNOV5
5,999
Getting a 409 error while loading xglue dataset
{ "login": "Praful932", "id": 45713796, "node_id": "MDQ6VXNlcjQ1NzEzNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Praful932", "html_url": "https://github.com/Praful932", "followers_url": "https://api.github.com/users/Praful932/followers", "following_url": "https://api.github.com/users/Praful932/following{/other_user}", "gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}", "starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praful932/subscriptions", "organizations_url": "https://api.github.com/users/Praful932/orgs", "repos_url": "https://api.github.com/users/Praful932/repos", "events_url": "https://api.github.com/users/Praful932/events{/privacy}", "received_events_url": "https://api.github.com/users/Praful932/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2023-06-30T04:13:54
2023-06-30T05:57:23
2023-06-30T05:57:22
NONE
null
null
null
### Describe the bug Unable to load xglue dataset ### Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("xglue", "ntg") ``` > ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409) ### Expected behavior Expected the dataset to load ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5999/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5998/comments
https://api.github.com/repos/huggingface/datasets/issues/5998/events
https://github.com/huggingface/datasets/issues/5998
1,781,805,018
I_kwDODunzps5qNC_a
5,998
The current implementation has a potential bug in the sort method
{ "login": "wangyuxinwhy", "id": 22192665, "node_id": "MDQ6VXNlcjIyMTkyNjY1", "avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangyuxinwhy", "html_url": "https://github.com/wangyuxinwhy", "followers_url": "https://api.github.com/users/wangyuxinwhy/followers", "following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}", "gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions", "organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs", "repos_url": "https://api.github.com/users/wangyuxinwhy/repos", "events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}", "received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-30T03:16:57
2023-06-30T14:21:03
2023-06-30T14:11:25
NONE
null
null
null
### Describe the bug In the `sort` method, there's this piece of code ```python # column_names: Union[str, Sequence_[str]] # Check proper format of and for duplicates in column_names if not isinstance(column_names, list): column_names = [column_names] ``` The `column_names` type annotation implies that a tuple can be passed, but passing one raises an error, as in the example below. ```python from datasets import load_dataset dataset = load_dataset('glue', 'ax')['test'] dataset.sort(column_names=('premise', 'hypothesis')) # Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset. ``` After changing the tuple into a list, everything worked fine. Changing the code to the following would avoid the problem: ```python # Check proper format of and for duplicates in column_names if not isinstance(column_names, list): if isinstance(column_names, str): column_names = [column_names] else: column_names = list(column_names) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('glue', 'ax')['test'] dataset.sort(column_names=('premise', 'hypothesis')) # Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset. ``` ### Expected behavior Passing a tuple as `column_names` should be equivalent to passing a list. ### Environment info - `datasets` version: 2.13.0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5998/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5997/comments
https://api.github.com/repos/huggingface/datasets/issues/5997/events
https://github.com/huggingface/datasets/issues/5997
1,781,582,818
I_kwDODunzps5qMMvi
5,997
extend the map function so it can wrap around long text that does not fit in the context window
{ "login": "siddhsql", "id": 127623723, "node_id": "U_kgDOB5tiKw", "avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddhsql", "html_url": "https://github.com/siddhsql", "followers_url": "https://api.github.com/users/siddhsql/followers", "following_url": "https://api.github.com/users/siddhsql/following{/other_user}", "gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions", "organizations_url": "https://api.github.com/users/siddhsql/orgs", "repos_url": "https://api.github.com/users/siddhsql/repos", "events_url": "https://api.github.com/users/siddhsql/events{/privacy}", "received_events_url": "https://api.github.com/users/siddhsql/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2023-06-29T22:15:21
2023-07-03T17:58:52
null
NONE
null
null
null
### Feature request I understand `datasets` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function takes a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap the text into multiple rows, with each row fitting the model's context window. I tried to do it using this code as an example, which I borrowed from [here](https://stackoverflow.com/a/76343993/147530): ``` data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True) ``` but running the code gives me this error: ``` File "/llm/fine-tune.py", line 117, in <module> data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single writer.write_batch(batch) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch pa_table = pa.Table.from_arrays(arrays, schema=schema) File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447 ``` The lambda function I have provided is correctly chopping up the long text so it wraps around (which is why 394 samples become 447 after wrapping), but the dataset `map` function does not like it. ### Motivation Please see above. ### Your contribution I'm afraid I don't have much knowledge to help.
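Not part of the original request, but the usual resolution for this exact `ArrowInvalid`: when a batched `map` changes the number of rows, the untouched input columns (here `text`, still 394 rows) no longer line up with the 447-row tokenizer output, so dropping the input columns lets the row count change freely. A sketch, assuming `data` is a `Dataset` (for a `DatasetDict`, pass the column list per split):

```python
# dropping the original columns lets the batch grow from 394 to 447 rows,
# since only the tokenizer's consistently sized outputs remain
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=data.column_names,
)
```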
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5997/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5996/comments
https://api.github.com/repos/huggingface/datasets/issues/5996/events
https://github.com/huggingface/datasets/pull/5996
1,779,294,374
PR_kwDODunzps5UKP0i
5,996
Deprecate `use_auth_token` in favor of `token`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
2023-06-28T16:26:38
2023-07-05T15:22:20
2023-07-03T16:03:33
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5996", "html_url": "https://github.com/huggingface/datasets/pull/5996", "diff_url": "https://github.com/huggingface/datasets/pull/5996.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5996.patch", "merged_at": "2023-07-03T16:03:33" }
... to be consistent with `transformers` and `huggingface_hub`.
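For illustration, the rename amounts to the following at call sites (the dataset id is a placeholder):

```python
from datasets import load_dataset

# before (deprecated after this change)
ds = load_dataset("user/private-dataset", use_auth_token=True)

# after
ds = load_dataset("user/private-dataset", token=True)
```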
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5996/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5995/comments
https://api.github.com/repos/huggingface/datasets/issues/5995/events
https://github.com/huggingface/datasets/pull/5995
1,777,088,925
PR_kwDODunzps5UCvYJ
5,995
Support returning dataframe in map transform
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-27T14:15:08
2023-06-28T13:56:02
2023-06-28T13:46:33
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5995", "html_url": "https://github.com/huggingface/datasets/pull/5995", "diff_url": "https://github.com/huggingface/datasets/pull/5995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5995.patch", "merged_at": "2023-06-28T13:46:33" }
Allow returning Pandas DataFrames in `map` transforms. (Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
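A sketch of what the feature enables (illustrative only, not code from the PR):

```python
import pandas as pd
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# a batched map function may now return a pandas DataFrame
ds = ds.map(
    lambda batch: pd.DataFrame({"a_doubled": [x * 2 for x in batch["a"]]}),
    batched=True,
    remove_columns=["a"],
)
print(ds["a_doubled"])  # [2, 4, 6]
```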
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5995/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5994/comments
https://api.github.com/repos/huggingface/datasets/issues/5994/events
https://github.com/huggingface/datasets/pull/5994
1,776,829,004
PR_kwDODunzps5UB1cA
5,994
Fix select_columns columns order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-27T12:32:46
2023-06-27T15:40:47
2023-06-27T15:32:43
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5994", "html_url": "https://github.com/huggingface/datasets/pull/5994", "diff_url": "https://github.com/huggingface/datasets/pull/5994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5994.patch", "merged_at": "2023-06-27T15:32:43" }
Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`. I also fixed the same issue for `dataset.flatten()`. Close https://github.com/huggingface/datasets/issues/5993
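An illustrative check of the fixed behavior (a sketch, not a test from the PR):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": [2]}).select_columns(["b", "a"])
# after the fix, the features follow the selected order
assert list(ds.features) == ["b", "a"]
```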
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5994/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5993/comments
https://api.github.com/repos/huggingface/datasets/issues/5993/events
https://github.com/huggingface/datasets/issues/5993
1,776,643,555
I_kwDODunzps5p5W3j
5,993
ValueError: Table schema does not match schema used to create file
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
2023-06-27T10:54:07
2023-06-27T15:36:42
2023-06-27T15:32:44
NONE
null
null
null
### Describe the bug Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict( { "x1": [1, 2, 3], "x2": [10, 11, 12], } ) ds = dataset.select_columns(["x2", "x1"]) ds.to_parquet("demo.parquet") ``` ```shell >>> ValueError: Table schema does not match schema used to create file: table: x2: int64 x1: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs. file: x1: int64 x2: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53 ``` --- I think this is because after the `.select_columns()` call with out-of-order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it. ```python ds.features.arrow_schema >>> x1: int64 x2: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53 ds.data.schema >>> x2: int64 x1: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 ``` So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then fails on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌 https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141 ### Expected behavior The dataset gets successfully saved as parquet. In the same way as it does when saving it as CSV: ```python import datasets dataset = datasets.Dataset.from_dict( { "x1": [1, 2, 3], "x2": [10, 11, 12], } ) ds = dataset.select_columns(["x2", "x1"]) ds.to_csv("demo.csv") ``` ### Environment info `python==3.11` `datasets==2.13.1`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5993/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5992/comments
https://api.github.com/repos/huggingface/datasets/issues/5992/events
https://github.com/huggingface/datasets/pull/5992
1,776,460,964
PR_kwDODunzps5UAk3C
5,992
speedup
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-27T09:17:58
2023-06-27T09:23:07
2023-06-27T09:18:04
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5992", "html_url": "https://github.com/huggingface/datasets/pull/5992", "diff_url": "https://github.com/huggingface/datasets/pull/5992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5992.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5992/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5991/comments
https://api.github.com/repos/huggingface/datasets/issues/5991/events
https://github.com/huggingface/datasets/issues/5991
1,774,456,518
I_kwDODunzps5pxA7G
5,991
`map` with any joblib backend
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2023-06-26T10:33:42
2023-06-26T10:33:42
null
MEMBER
null
null
null
We recently enabled the (experimental) parallel backend switch for data download and extraction, but not for `map` yet. Right now we're using our `iflatmap_unordered` implementation for multiprocessing, which uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process. If we had a Queue implementation that worked on any joblib backend, e.g. by leveraging the filesystem that is shared among workers, we could have `iflatmap_unordered` for joblib, and therefore a `map` with any joblib backend, with a progress bar! Note that the Queue doesn't need to be that optimized, since we can choose a low frequency for progress updates (like 1 update per second).
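A rough sketch of the kind of filesystem-backed progress channel described above (all names hypothetical; this is not `datasets` code):

```python
import os


class FSProgressQueue:
    """Each worker appends increments to its own file in a shared directory;
    the main process periodically sums the files to update a progress bar."""

    def __init__(self, shared_dir: str):
        self.shared_dir = shared_dir
        os.makedirs(shared_dir, exist_ok=True)

    def put(self, n: int = 1) -> None:
        # one file per worker process avoids write contention
        path = os.path.join(self.shared_dir, f"{os.getpid()}.progress")
        with open(path, "a") as f:
            f.write(f"{n}\n")

    def total(self) -> int:
        # called from the main process, e.g. once per second
        count = 0
        for name in os.listdir(self.shared_dir):
            if name.endswith(".progress"):
                with open(os.path.join(self.shared_dir, name)) as f:
                    count += sum(int(line) for line in f if line.strip())
        return count
```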
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5991/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5989/comments
https://api.github.com/repos/huggingface/datasets/issues/5989/events
https://github.com/huggingface/datasets/issues/5989
1,774,134,091
I_kwDODunzps5pvyNL
5,989
Set a rule on the config and split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-06-26T07:34:14
2023-07-19T14:22:54
null
CONTRIBUTOR
null
null
null
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols directly in datasets and raise https://github.com/huggingface/datasets-server/issues/853
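A sketch of the kind of validation being proposed (the exact rule and function name are assumptions):

```python
import re


def check_split_name(name: str) -> None:
    # reject whitespace in config/split names, per the proposal above
    if re.search(r"\s", name):
        raise ValueError(f"Config/split name {name!r} must not contain whitespace")


check_split_name("train")       # ok
# check_split_name("my split")  # would raise ValueError
```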
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5989/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5988/comments
https://api.github.com/repos/huggingface/datasets/issues/5988/events
https://github.com/huggingface/datasets/issues/5988
1,773,257,828
I_kwDODunzps5pscRk
5,988
ConnectionError: Couldn't reach dataset_infos.json
{ "login": "yulingao", "id": 20674868, "node_id": "MDQ6VXNlcjIwNjc0ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yulingao", "html_url": "https://github.com/yulingao", "followers_url": "https://api.github.com/users/yulingao/followers", "following_url": "https://api.github.com/users/yulingao/following{/other_user}", "gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}", "starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yulingao/subscriptions", "organizations_url": "https://api.github.com/users/yulingao/orgs", "repos_url": "https://api.github.com/users/yulingao/repos", "events_url": "https://api.github.com/users/yulingao/events{/privacy}", "received_events_url": "https://api.github.com/users/yulingao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-25T12:39:31
2023-07-07T13:20:57
2023-07-07T13:20:57
NONE
null
null
null
### Describe the bug I'm trying to load codeparrot/codeparrot-clean-train, but I get the following error: ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))) ### Steps to reproduce the bug ```python train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train') ``` ### Expected behavior The dataset downloads successfully. ### Environment info CentOS 7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5988/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5987/comments
https://api.github.com/repos/huggingface/datasets/issues/5987/events
https://github.com/huggingface/datasets/issues/5987
1,773,047,909
I_kwDODunzps5prpBl
5,987
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-06-25T04:19:13
2023-06-29T16:06:08
2023-06-29T16:06:08
NONE
null
null
null
### Describe the bug

https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809

What I can do is break up `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.

### Steps to reproduce the bug

https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809

### Expected behavior

Users can define the max shard size.

### Environment info

datasets==2.13.1
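A minimal sketch of the workaround described above, assuming `download_and_prepare` accepts `max_shard_size` as in datasets 2.13 (the dataset name is a placeholder):

```python
from datasets import load_dataset_builder

# Bypass load_dataset so that max_shard_size can be passed explicitly.
builder = load_dataset_builder("codeparrot/codeparrot-clean-train")  # placeholder dataset
builder.download_and_prepare(max_shard_size="500MB")
train_data = builder.as_dataset(split="train")
```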
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5987/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5986/comments
https://api.github.com/repos/huggingface/datasets/issues/5986/events
https://github.com/huggingface/datasets/pull/5986
1,772,233,111
PR_kwDODunzps5TygOZ
5,986
Make IterableDataset.from_spark more efficient
{ "login": "mathewjacob1002", "id": 134338709, "node_id": "U_kgDOCAHYlQ", "avatar_url": "https://avatars.githubusercontent.com/u/134338709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathewjacob1002", "html_url": "https://github.com/mathewjacob1002", "followers_url": "https://api.github.com/users/mathewjacob1002/followers", "following_url": "https://api.github.com/users/mathewjacob1002/following{/other_user}", "gists_url": "https://api.github.com/users/mathewjacob1002/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathewjacob1002/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathewjacob1002/subscriptions", "organizations_url": "https://api.github.com/users/mathewjacob1002/orgs", "repos_url": "https://api.github.com/users/mathewjacob1002/repos", "events_url": "https://api.github.com/users/mathewjacob1002/events{/privacy}", "received_events_url": "https://api.github.com/users/mathewjacob1002/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
2023-06-23T22:18:20
2023-07-07T10:05:58
2023-07-07T09:56:09
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5986", "html_url": "https://github.com/huggingface/datasets/pull/5986", "diff_url": "https://github.com/huggingface/datasets/pull/5986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5986.patch", "merged_at": "2023-07-07T09:56:09" }
Moved the code from using `collect()` to using `toLocalIterator()`, which allows prefetching the partitions that will be selected next, thus improving performance when iterating.
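Roughly, the idea is the following (a sketch of the approach, not the actual diff from this PR):

```python
def iterate_rows(df):
    """Yield rows one partition at a time instead of collecting everything.

    df.collect() materializes the full DataFrame on the driver, whereas
    toLocalIterator(prefetchPartitions=True) fetches partitions lazily and
    prefetches the next partition while the current one is being consumed.
    """
    for row in df.toLocalIterator(prefetchPartitions=True):
        yield row.asDict()
```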
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5986/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5985/comments
https://api.github.com/repos/huggingface/datasets/issues/5985/events
https://github.com/huggingface/datasets/issues/5985
1,771,588,158
I_kwDODunzps5pmEo-
5,985
Cannot reuse tokenizer object for dataset map
{ "login": "vikigenius", "id": 12724810, "node_id": "MDQ6VXNlcjEyNzI0ODEw", "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikigenius", "html_url": "https://github.com/vikigenius", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "repos_url": "https://api.github.com/users/vikigenius/repos", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
2
2023-06-23T14:45:31
2023-07-21T14:09:14
2023-07-21T14:09:14
NONE
null
null
null
### Describe the bug

Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or a caching issue, so filing in both.

Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though its hash remains the same. But `dumps` is able to detect that internal change, which causes the tokenizer object's fingerprint to change.

### Steps to reproduce the bug

```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps  # Huggingface datasets

t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2  # Assertion Error
```

But if you use just the hash of the object without `dumps`, the hashes don't change:

```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps  # Huggingface datasets

t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t)  # Just hash, no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t)  # Just hash, no dumps
assert th1 == th2  # This is OK
```

This causes situations such as the following:

1. Create a text file like this: `yes "This is an example text" | head -n 10000 > lines.txt`

```python
from transformers import AutoTokenizer
import datasets


class TokenizeMapper(object):
    """Mapper for tokenizer.

    This is needed because the caching mechanism of HuggingFace does not work on
    lambdas. Each time a new lambda will be created by a new process, which will
    lead to a different hash. This way we can have a universal mapper object in
    init and reuse it with the same hash for each process.
    """

    def __init__(self, tokenizer):
        """Initialize the tokenizer."""
        self.tokenizer = tokenizer

    def __call__(self, examples, **kwargs):
        """Run the mapper."""
        texts = examples["text"]
        tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
        batch_outputs = {
            "input_ids": tt.input_ids,
            "attention_mask": tt.attention_mask,
        }
        return batch_outputs


t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
```

The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.

### Expected behavior

We should be able to initialize a tokenizer, and reusing it should let us reuse the same `map` computation for the same dataset.

### Environment info

- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5985/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5984/comments
https://api.github.com/repos/huggingface/datasets/issues/5984/events
https://github.com/huggingface/datasets/issues/5984
1,771,571,458
I_kwDODunzps5pmAkC
5,984
AutoSharding IterableDatasets when num_workers > 1
{ "login": "mathephysicist", "id": 25594384, "node_id": "MDQ6VXNlcjI1NTk0Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathephysicist", "html_url": "https://github.com/mathephysicist", "followers_url": "https://api.github.com/users/mathephysicist/followers", "following_url": "https://api.github.com/users/mathephysicist/following{/other_user}", "gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions", "organizations_url": "https://api.github.com/users/mathephysicist/orgs", "repos_url": "https://api.github.com/users/mathephysicist/repos", "events_url": "https://api.github.com/users/mathephysicist/events{/privacy}", "received_events_url": "https://api.github.com/users/mathephysicist/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
6
2023-06-23T14:34:20
2023-07-04T17:03:56
null
NONE
null
null
null
### Feature request

Minimal example:

```python
import torch
from datasets import IterableDataset

d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d, num_workers=3)

for sample in dl:
    print(sample)
```

Warning: Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers. To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.

Expected behavior: the dataset is sharded, and each worker uses a subset (contiguously, so you can do checkpoint loading/saving).

### Motivation

I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's DataLoader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed) gets different shards, but we should extend it so that this also continues for multiple workers.

### Your contribution

If someone points me to what needs to change, I can create a PR.
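For reference, a sketch of the workaround the warning itself suggests: split the source into several files so that `n_shards > 1` and each DataLoader worker can claim some shards (the file names here are placeholders):

```python
from datasets import load_dataset

# With one shard per input file, up to len(files) dataloader workers are useful.
files = [f"shard_{i}.txt" for i in range(8)]  # placeholder file names
ds = load_dataset("text", data_files=files, split="train", streaming=True)
print(ds.n_shards)  # 8
```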
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5984/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5983/comments
https://api.github.com/repos/huggingface/datasets/issues/5983/events
https://github.com/huggingface/datasets/pull/5983
1,770,578,804
PR_kwDODunzps5TtDdy
5,983
replaced PathLike as a variable for save_to_disk for dataset_path wit…
{ "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-06-23T00:57:05
2023-06-23T00:57:05
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5983", "html_url": "https://github.com/huggingface/datasets/pull/5983", "diff_url": "https://github.com/huggingface/datasets/pull/5983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5983.patch", "merged_at": null }
…h str like that of load_from_disk
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5983/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5982/comments
https://api.github.com/repos/huggingface/datasets/issues/5982/events
https://github.com/huggingface/datasets/issues/5982
1,770,333,296
I_kwDODunzps5phSRw
5,982
404 on Datasets Documentation Page
{ "login": "kmulka-bloomberg", "id": 118509387, "node_id": "U_kgDOBxBPSw", "avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kmulka-bloomberg", "html_url": "https://github.com/kmulka-bloomberg", "followers_url": "https://api.github.com/users/kmulka-bloomberg/followers", "following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}", "gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions", "organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs", "repos_url": "https://api.github.com/users/kmulka-bloomberg/repos", "events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}", "received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-22T20:14:57
2023-06-26T15:45:03
2023-06-26T15:45:03
NONE
null
null
null
### Describe the bug

Getting a 404 from the Hugging Face Datasets docs page: https://huggingface.co/docs/datasets/index

### Steps to reproduce the bug

1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found

### Expected behavior

URL should either show docs or redirect to the new location.

### Environment info

huggingface.co
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5982/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5981/comments
https://api.github.com/repos/huggingface/datasets/issues/5981/events
https://github.com/huggingface/datasets/issues/5981
1,770,310,087
I_kwDODunzps5phMnH
5,981
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
{ "login": "mmr-crexi", "id": 107141022, "node_id": "U_kgDOBmLXng", "avatar_url": "https://avatars.githubusercontent.com/u/107141022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmr-crexi", "html_url": "https://github.com/mmr-crexi", "followers_url": "https://api.github.com/users/mmr-crexi/followers", "following_url": "https://api.github.com/users/mmr-crexi/following{/other_user}", "gists_url": "https://api.github.com/users/mmr-crexi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmr-crexi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmr-crexi/subscriptions", "organizations_url": "https://api.github.com/users/mmr-crexi/orgs", "repos_url": "https://api.github.com/users/mmr-crexi/repos", "events_url": "https://api.github.com/users/mmr-crexi/events{/privacy}", "received_events_url": "https://api.github.com/users/mmr-crexi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T19:57:31
2023-07-24T11:54:52
2023-07-24T11:54:52
NONE
null
null
null
### Describe the bug

When using the newer pytorch 3.10 kernel, only 2 cores are being used by the Hugging Face `filter` and `map` functions. The pytorch 3.9 kernel would use as many cores as specified in the `num_proc` field.

We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:

```python
os.sched_setaffinity(0, {i for i in range(1000)})
```

The problem, as near as we can tell, is that once upon a time, CPU affinity was set using a bitmask ("0xfffff" and the like), and affinity recently changed to a list of processors rather than using the mask. As such, only processors 1 and 17 are shown to be working in htop.

![Selection_072](https://github.com/huggingface/datasets/assets/107141022/04c5a824-5321-4531-afca-7bc84dff36b4)

When running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.

### Steps to reproduce the bug

Repro steps:

1. Create an AWS SageMaker instance
2. Use the pytorch 3_10 kernel
3. Load a dataset
4. Run a filter operation
5. Watch as only 2 cores are used when num_proc > 2
6. Run a map operation
7. Watch as only 2 cores are used when num_proc > 2
8. Run a map operation with processor affinity reset inside the function called via map
9. Watch as all cores run

### Expected behavior

All specified cores are used via the num_proc argument.

### Environment info

AWS SageMaker with the following init script run in the terminal after instance creation:

```bash
conda init bash
bash
conda activate pytorch_p310
pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
sudo yum -y install htop
sudo yum -y update
sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel
```
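For readers hitting the same thing, a minimal sketch of where the affinity-reset snippet goes (the transform itself is a placeholder, and `os.cpu_count()` is used instead of the hard-coded 1000 above):

```python
import os

def process(example):
    # Re-open this worker's CPU affinity so it is not pinned to the
    # cores inherited from the parent process.
    os.sched_setaffinity(0, set(range(os.cpu_count())))
    example["n_chars"] = len(example["text"])  # placeholder transform
    return example

# ds.map(process, num_proc=16)  # each subprocess resets its own affinity
```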
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5981/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5980/comments
https://api.github.com/repos/huggingface/datasets/issues/5980/events
https://github.com/huggingface/datasets/issues/5980
1,770,255,973
I_kwDODunzps5pg_Zl
5,980
Viewing dataset card returns “502 Bad Gateway”
{ "login": "tbenthompson", "id": 4241811, "node_id": "MDQ6VXNlcjQyNDE4MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tbenthompson", "html_url": "https://github.com/tbenthompson", "followers_url": "https://api.github.com/users/tbenthompson/followers", "following_url": "https://api.github.com/users/tbenthompson/following{/other_user}", "gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}", "starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions", "organizations_url": "https://api.github.com/users/tbenthompson/orgs", "repos_url": "https://api.github.com/users/tbenthompson/repos", "events_url": "https://api.github.com/users/tbenthompson/events{/privacy}", "received_events_url": "https://api.github.com/users/tbenthompson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T19:14:48
2023-06-27T08:38:19
2023-06-26T14:42:45
NONE
null
null
null
The URL is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams

I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)

Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5980/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5979/comments
https://api.github.com/repos/huggingface/datasets/issues/5979/events
https://github.com/huggingface/datasets/pull/5979
1,770,198,250
PR_kwDODunzps5TrxS_
5,979
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T18:32:14
2023-06-22T18:42:22
2023-06-22T18:32:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5979", "html_url": "https://github.com/huggingface/datasets/pull/5979", "diff_url": "https://github.com/huggingface/datasets/pull/5979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5979.patch", "merged_at": "2023-06-22T18:32:22" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5979/timeline
null
null