Unnamed: 0 | url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
200 | https://api.github.com/repos/huggingface/datasets/issues/4764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4764/comments | https://api.github.com/repos/huggingface/datasets/issues/4764/events | https://github.com/huggingface/datasets/pull/4764 | 1,321,295,961 | PR_kwDODunzps48RMLu | 4,764 | Update CI badge | {'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-28 18:04:20 | 2022-07-29 11:36:37 | 2022-07-29 11:23:51 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4764', 'html_url': 'https://github.com/huggingface/datasets/pull/4764', 'diff_url': 'https://github.com/huggingface/datasets/pull/4764.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4764.patch', 'merged_at': datetime.datetime(2022, 7, 29, 11, 23, 51)} | Replace the old CircleCI badge with a new one for GH Actions. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4764/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4764/timeline | null | null | true |
201 | https://api.github.com/repos/huggingface/datasets/issues/4763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4763/comments | https://api.github.com/repos/huggingface/datasets/issues/4763/events | https://github.com/huggingface/datasets/pull/4763 | 1,321,295,876 | PR_kwDODunzps48RMKi | 4,763 | More rigorous shape inference in to_tf_dataset | {'login': 'Rocketknight1', 'id': 12866554, 'node_id': 'MDQ6VXNlcjEyODY2NTU0', 'avatar_url': 'https://avatars.githubusercontent.com/u/12866554?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Rocketknight1', 'html_url': 'https://github.com/Rocketknight1', 'followers_url': 'https://api.github.com/users/Rocketknight1/followers', 'following_url': 'https://api.github.com/users/Rocketknight1/following{/other_user}', 'gists_url': 'https://api.github.com/users/Rocketknight1/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Rocketknight1/subscriptions', 'organizations_url': 'https://api.github.com/users/Rocketknight1/orgs', 'repos_url': 'https://api.github.com/users/Rocketknight1/repos', 'events_url': 'https://api.github.com/users/Rocketknight1/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Rocketknight1/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4763). All of your documentation changes will be reflected on that endpoint.'] | 2022-07-28 18:04:15 | 2022-08-19 11:40:51 | null | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4763', 'html_url': 'https://github.com/huggingface/datasets/pull/4763', 'diff_url': 'https://github.com/huggingface/datasets/pull/4763.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4763.patch', 'merged_at': None} | `tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause training to fail for dimensions that are needed to determine the shape of weight tensors!
The compromise I used here was to sample several batches from the underlying dataset and apply the `collate_fn` to them, and then to see which dimensions were "empirically variable". There's an obvious problem here, though - if you sample 10 batches and they all have the same shape on a certain dimension, there's still a small chance that the 11th batch will be different, and Keras will throw an error if a dataset tries to emit a tensor whose shape doesn't match the spec.
I encountered this bug in practice once or twice for datasets that were mostly-but-not-totally constant on a given dimension, and I still don't have a perfect solution, but this PR should greatly reduce the risk. It samples many more batches, and also samples very small batches (size 2) - this increases the variability, making it more likely that a few outlier samples will be detected.
Ideally, of course, we'd determine the full output shape analytically, but that's surprisingly tricky when the `collate_fn` can be any arbitrary Python code! | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4763/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4763/timeline | null | null | true |
202 | https://api.github.com/repos/huggingface/datasets/issues/4762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4762/comments | https://api.github.com/repos/huggingface/datasets/issues/4762/events | https://github.com/huggingface/datasets/pull/4762 | 1,321,261,733 | PR_kwDODunzps48RE56 | 4,762 | Improve features resolution in streaming | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4762). All of your documentation changes will be reflected on that endpoint.'] | 2022-07-28 17:28:11 | 2022-07-28 17:34:54 | null | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4762', 'html_url': 'https://github.com/huggingface/datasets/pull/4762', 'diff_url': 'https://github.com/huggingface/datasets/pull/4762.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4762.patch', 'merged_at': None} | `IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well.
I also fixed `interleave_datasets` that was not filling missing columns with None, because it was not using the columns from `IterableDataset._resolve_features`
cc @severo | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4762/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 1, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4762/timeline | null | null | true |
203 | https://api.github.com/repos/huggingface/datasets/issues/4761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4761/comments | https://api.github.com/repos/huggingface/datasets/issues/4761/events | https://github.com/huggingface/datasets/issues/4761 | 1,321,068,411 | I_kwDODunzps5Oved7 | 4,761 | parallel searching in multi-gpu setting using faiss | {'login': 'xwwwwww', 'id': 48146603, 'node_id': 'MDQ6VXNlcjQ4MTQ2NjAz', 'avatar_url': 'https://avatars.githubusercontent.com/u/48146603?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/xwwwwww', 'html_url': 'https://github.com/xwwwwww', 'followers_url': 'https://api.github.com/users/xwwwwww/followers', 'following_url': 'https://api.github.com/users/xwwwwww/following{/other_user}', 'gists_url': 'https://api.github.com/users/xwwwwww/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/xwwwwww/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/xwwwwww/subscriptions', 'organizations_url': 'https://api.github.com/users/xwwwwww/orgs', 'repos_url': 'https://api.github.com/users/xwwwwww/repos', 'events_url': 'https://api.github.com/users/xwwwwww/events{/privacy}', 'received_events_url': 'https://api.github.com/users/xwwwwww/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ["And I don't see any speed up when increasing the number of GPUs while calling `get_nearest_examples_batch`."
"Hi ! Yes search_batch uses FAISS search which happens in parallel across the GPUs\r\n\r\n> And I don't see any speed up when increasing the number of GPUs while calling get_nearest_examples_batch.\r\n\r\nThat's unexpected, can you share the code you're running ?"
'here is the code snippet\r\n\r\n```python\r\n\r\n# add faiss index\r\nsource_dataset = load_dataset(source_path)\r\nqueries = load_dataset(query_path)\r\ngpu = [0,1,2,3]\r\nsource_dataset.add_faiss_index(\r\n "embedding",\r\n device=gpu,\r\n )\r\n\r\n\r\n# batch query\r\nbatch_size = 32\r\nfor i in tqdm(range(0, len(queries), batch_size)):\r\n if i + batch_size >= len(queries):\r\n batched_queries = queries[i:]\r\n else:\r\n batched_queries = queries[i:i+batch_size]\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries[\'embedding\']], axis=0)\r\n scores, candidates = source_dataset.get_nearest_examples_batch(\r\n "embedding",\r\n batched_query_embeddings,\r\n k=5\r\n )\r\n```'
'My version of datasets is `2.4.1.dev0`.'
'The code looks all good to me, do you see all the GPUs being utilized ? What version of faiss are you using ?'
'I can see the memory usage of all the GPUs.\r\nMy version of `faiss-gpu` is `1.7.2`'
"It looks all good to me then ^^ though you said you didn't experienced speed improvements by adding more GPUs ? What size is your source dataset and what time differences did you experience ?"
'query set: 1e6\r\nsource dataset: 1e6\r\nembedding size: 768\r\nindex: Flat\r\ntopk: 20\r\nGPU: V100\r\n\r\nThe time taken to traverse the query set once is about 1.5h, which is almost not influenced by the value of query batch size or the number of GPUs according to my experiments.'
"Hmmm the number of GPUs should divide the time, something is going wrong. Can you check that adding more GPU does divide the memory used per GPU ? Maybe it can be worth looking at similar issues in the FAISS repository or create a noew issue over there to understand what's going on"
'> Can you check that adding more GPU does divide the memory used per GPU \r\n\r\nThe memory used per GPU is unchanged while adding more GPU. Is this unexpected?\r\n\r\nI used to think that every GPU loads all the source vectors and the data parallelism is at the query level. 😆 '
"> I used to think that every GPU loads all the source vectors and the data parallelism is at the query level. 😆\r\n\r\nOh indeed that's possible, I wasn't sure. Anyway you can check that calling get_nearest_examples_batch simply calls search under the hood: \r\n\r\nhttps://github.com/huggingface/datasets/blob/f90f71fbbb33889fe75a3ffc101cdf16a88a3453/src/datasets/search.py#L375"
'Here is a runnable script. \r\nMulti-GPU searching still does not work in my experiments.\r\n\r\n\r\n```python\r\nimport os\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport datasets\r\nfrom datasets import Dataset\r\n\r\nclass DPRSelector:\r\n\r\n def __init__(self, source, target, index_name, gpu=None):\r\n self.source = source\r\n self.target = target\r\n self.index_name = index_name\r\n\r\n cache_path = \'embedding.faiss\'\r\n\r\n if not os.path.exists(cache_path):\r\n self.source.add_faiss_index(\r\n column="embedding",\r\n index_name=index_name,\r\n device=gpu,\r\n )\r\n self.source.save_faiss_index(index_name, cache_path)\r\n else:\r\n self.source.load_faiss_index(\r\n index_name,\r\n cache_path,\r\n device=gpu\r\n )\r\n print(\'index builded!\')\r\n\r\n def build_dataset(self, top_k, batch_size):\r\n print(\'start search\')\r\n\r\n for i in tqdm(range(0, len(self.target), batch_size)):\r\n if i + batch_size >= len(self.target):\r\n batched_queries = self.target[i:]\r\n else:\r\n batched_queries = self.target[i:i+batch_size]\r\n\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries[\'embedding\']], axis=0)\r\n search_res = self.source.get_nearest_examples_batch(\r\n self.index_name,\r\n batched_query_embeddings,\r\n k=top_k\r\n )\r\n \r\n print(\'finish search\')\r\n\r\n\r\ndef get_pseudo_dataset():\r\n pseudo_dict = {"embedding": np.zeros((1000000, 768), dtype=np.float32)}\r\n print(\'generate pseudo data\')\r\n\r\n dataset = Dataset.from_dict(pseudo_dict)\r\n def list_to_array(data):\r\n return {"embedding": [np.array(vector, dtype=np.float32) for vector in data["embedding"]]} \r\n dataset.set_transform(list_to_array, columns=\'embedding\', output_all_columns=True)\r\n\r\n print(\'build dataset\')\r\n return dataset\r\n\r\n\r\n\r\nif __name__=="__main__":\r\n\r\n np.random.seed(42)\r\n\r\n\r\n source_dataset = get_pseudo_dataset()\r\n target_dataset = get_pseudo_dataset()\r\n\r\n gpu = [0,1,2,3,4,5,6,7]\r\n selector = DPRSelector(source_dataset, target_dataset, "embedding", gpu=gpu)\r\n\r\n selector.build_dataset(top_k=20, batch_size=32)\r\n```'
'@lhoestq Hi, could you please test the code above if you have time? 😄 '
"Maybe @albertvillanova you can take a look ? I won't be available in the following days"
'@albertvillanova Hi, can you help with this issue?'
"Hi @xwwwwww I'm investigating it, but I'm not an expert in Faiss. In principle, it is weird that your code does not work properly because it seems right..."
'Have you tried passing `gpu=-1` and check if there is a speedup?'
'> Have you tried passing `gpu=-1` and check if there is a speedup?\r\n\r\nyes, there is a speed up using GPU compared with CPU. '
"When passing `device=-1`, ALL existing GPUs are used (multi GPU): this is the maximum speedup you can get. To know the number of total GPUs:\r\n```\r\nimport faiss\r\n\r\nngpus = faiss.get_num_gpus()\r\nprint(ngpus)\r\n```\r\n\r\nWhen passing a list of integers to `device`, then only that number of GPUs are used (multi GPU as well)\r\n- the speedup should be proportional (more or less) to the ratio of the number of elements passed to `device` over `ngpus`\r\n- if this is not the case, then there is an issue in the implementation of this use case (however, I have reviewed the code and in principle I can't find any evident bug)\r\n\r\nWhen passing a positive integer to `device`, then only a single GPU is used.\r\n- this time should be more or less proportional to the time when passing `device=-1` over `ngpus`"
'Thanks for your help!\r\nHave you run the code and replicated the same experimental results (i.e., no speedup while increasing the number of GPUs)?'
'@albertvillanova @lhoestq Sorry for the bother, is there any progress on this issue? 😃 '
'I can confirm `add_faiss_index` calls `index = faiss.index_cpu_to_gpus_list(index, gpus=list(device))`.\r\n\r\nCould this be an issue with your environment ? Could you try running with 1 and 8 GPUs with a code similar to[ this one from the FAISS examples](https://github.com/facebookresearch/faiss/blob/main/tutorial/python/5-Multiple-GPUs.py) but using `gpu_index = faiss.index_cpu_to_gpus_list(cpu_index, gpus=list(device))`, and see if the speed changes ?'] | 2022-07-28 14:57:03 | 2022-08-18 14:59:10 | null | CONTRIBUTOR | null | null | null | While I notice that `add_faiss_index` has supported assigning multiple GPUs, I am still confused about how it works.
Does the `search-batch` function automatically parallelizes the input queries to different gpus?https://github.com/huggingface/datasets/blob/d76599bdd4d186b2e7c4f468b05766016055a0a5/src/datasets/search.py#L360 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4761/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4761/timeline | null | null | false |
204 | https://api.github.com/repos/huggingface/datasets/issues/4760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4760/comments | https://api.github.com/repos/huggingface/datasets/issues/4760/events | https://github.com/huggingface/datasets/issues/4760 | 1,320,878,223 | I_kwDODunzps5OuwCP | 4,760 | Issue with offline mode | {'login': 'SaulLu', 'id': 55560583, 'node_id': 'MDQ6VXNlcjU1NTYwNTgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/55560583?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/SaulLu', 'html_url': 'https://github.com/SaulLu', 'followers_url': 'https://api.github.com/users/SaulLu/followers', 'following_url': 'https://api.github.com/users/SaulLu/following{/other_user}', 'gists_url': 'https://api.github.com/users/SaulLu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/SaulLu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/SaulLu/subscriptions', 'organizations_url': 'https://api.github.com/users/SaulLu/orgs', 'repos_url': 'https://api.github.com/users/SaulLu/repos', 'events_url': 'https://api.github.com/users/SaulLu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/SaulLu/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 
'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ["Hi @SaulLu, thanks for reporting.\r\n\r\nI think offline mode is not supported for datasets containing only data files (without any loading script). I'm having a look into this..."
"Thanks for your feedback! \r\n\r\nTo give you a little more info, if you don't set the offline mode flag, the script will load the cache. I first noticed this behavior with the `evaluate` library, and while trying to understand the downloading flow I realized that I had a similar error with datasets."
'This is an issue we have to fix.'
'This is related to https://github.com/huggingface/datasets/issues/3547'] | 2022-07-28 12:45:14 | 2022-07-28 16:05:36 | null | NONE | null | null | null | ## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, first, you'll need to run a script that will cache the dataset
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
then, you can try to reload it in offline mode:
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
## Expected results
I would have expected the 2nd snippet not to return any errors
## Actual results
The 2nd snippet returns:
```
Traceback (most recent call last):
File "/home/lucile_huggingface_co/sandbox/evaluate/test_cache_datasets.py", line 8, in <module>
ds = datasets.load_dataset(ds_name)
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1241, in dataset_module_factory
raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couln't reach the Hugging Face Hub for dataset 'SaulLu/toy_struc_dataset': Offline mode is enabled.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
Maybe I'm misunderstanding something in the use of the offline mode (see [doc](https://huggingface.co/docs/datasets/v2.4.0/en/loading#offline)), is that the case?
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4760/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4760/timeline | null | null | false |
205 | https://api.github.com/repos/huggingface/datasets/issues/4759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4759/comments | https://api.github.com/repos/huggingface/datasets/issues/4759/events | https://github.com/huggingface/datasets/issues/4759 | 1,320,783,300 | I_kwDODunzps5OuY3E | 4,759 | Dataset Viewer issue for Toygar/turkish-offensive-language-detection | {'login': 'Toygarr', 'id': 44132720, 'node_id': 'MDQ6VXNlcjQ0MTMyNzIw', 'avatar_url': 'https://avatars.githubusercontent.com/u/44132720?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Toygarr', 'html_url': 'https://github.com/Toygarr', 'followers_url': 'https://api.github.com/users/Toygarr/followers', 'following_url': 'https://api.github.com/users/Toygarr/following{/other_user}', 'gists_url': 'https://api.github.com/users/Toygarr/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Toygarr/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Toygarr/subscriptions', 'organizations_url': 'https://api.github.com/users/Toygarr/orgs', 'repos_url': 'https://api.github.com/users/Toygarr/repos', 'events_url': 'https://api.github.com/users/Toygarr/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Toygarr/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 
'site_admin': False}] | null | ['I refreshed the dataset viewer manually, it\'s fixed now. Sorry for the inconvenience.\r\n<img width="1557" alt="Capture d’écran 2022-07-28 à 09 17 39" src="https://user-images.githubusercontent.com/1676121/181514666-92d7f8e1-ddc1-4769-84f3-f1edfdb902e8.png">\r\n\r\n'] | 2022-07-28 11:21:43 | 2022-07-28 13:17:56 | 2022-07-28 13:17:48 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection
### Description
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
Hi, I provided train.csv, test.csv and valid.csv files. However, viewer says dataset does not exist.
Should I need to do anything else?
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4759/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4759/timeline | null | completed | false |
206 | https://api.github.com/repos/huggingface/datasets/issues/4757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4757/comments | https://api.github.com/repos/huggingface/datasets/issues/4757/events | https://github.com/huggingface/datasets/issues/4757 | 1,320,602,532 | I_kwDODunzps5Otsuk | 4,757 | Document better when relative paths are transformed to URLs | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | open | false | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 
'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False}] | null | [] | 2022-07-28 08:46:27 | 2022-08-01 09:06:44 | null | MEMBER | null | null | null | As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` of a dataset hosted on the Hub, the relative path is transformed to the corresponding URL of the Hub dataset.
Currently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize splits](https://huggingface.co/docs/datasets/v2.4.0/en/dataset_script#download-data-files-and-organize-splits)
> If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs.
Maybe we should document better how relative paths are handled, not only when creating a dataset loading script, but also when passing to `load_dataset`:
- `data_dir`
- `data_files`
CC: @stevhliu | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4757/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4757/timeline | null | null | false |
207 | https://api.github.com/repos/huggingface/datasets/issues/4755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4755/comments | https://api.github.com/repos/huggingface/datasets/issues/4755/events | https://github.com/huggingface/datasets/issues/4755 | 1,319,687,044 | I_kwDODunzps5OqNOE | 4,755 | Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size | {'login': 'srobertjames', 'id': 662612, 'node_id': 'MDQ6VXNlcjY2MjYxMg==', 'avatar_url': 'https://avatars.githubusercontent.com/u/662612?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/srobertjames', 'html_url': 'https://github.com/srobertjames', 'followers_url': 'https://api.github.com/users/srobertjames/followers', 'following_url': 'https://api.github.com/users/srobertjames/following{/other_user}', 'gists_url': 'https://api.github.com/users/srobertjames/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/srobertjames/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/srobertjames/subscriptions', 'organizations_url': 'https://api.github.com/users/srobertjames/orgs', 'repos_url': 'https://api.github.com/users/srobertjames/repos', 'events_url': 'https://api.github.com/users/srobertjames/events{/privacy}', 'received_events_url': 'https://api.github.com/users/srobertjames/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ["I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset/tinyroberta-squad2'\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(pretrained)\r\n\r\nquestions = ['Can you tell me why?', 'What time is it?']\r\ncontexts = ['This is context zero', 'Another paragraph goes here'] \r\n\r\ndef tok(questions, contexts):\r\n return tokenizer(text=questions,\r\n text_pair=contexts,\r\n truncation='only_second',\r\n return_overflowing_tokens=True,\r\n )\r\nprint(tok(questions, contexts)['overflow_to_sample_mapping'])\r\nassert tok(questions, contexts)['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=1)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # FAILS produces [0,0]\r\n```\r\n\r\nNote that even if the batch size would be larger, there will be instances where we will not have a lot of data, and end up using small batches. This can occur e.g. if `n_proc` causes batches to be underfill. I imagine it can also occur in other ways, e.g. the final leftover batch at the end."
"A larger batch size does _not_ have this behavior:\r\n\r\n```\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=2)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n```"] | 2022-07-27 14:54:11 | 2022-07-27 17:57:28 | null | NONE | null | null | null | ## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will be overflown into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.
## Steps to reproduce the bug
1. Make a dataset of 3 strings.
2. Tokenize via Dataset.map with n_proc = 8
3. Inspect the `overflow_to_sample_mapping` field
## Expected results
`[0, 1, 2]`
## Actual results
`[0, 0, 0]`
Notes:
1. I have not yet extracted a minimal example, but the above works reliably
2. If the dataset is large, I've yet to determine if this bug still happens a. not at all b. always c. on the small, leftover batch at the end.
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4755/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4755/timeline | null | null | false |
208 | https://api.github.com/repos/huggingface/datasets/issues/4754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4754/comments | https://api.github.com/repos/huggingface/datasets/issues/4754/events | https://github.com/huggingface/datasets/pull/4754 | 1,319,681,541 | PR_kwDODunzps48L9p6 | 4,754 | Remove "unkown" language tags | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-27 14:50:12 | 2022-07-27 15:03:00 | 2022-07-27 14:51:06 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4754', 'html_url': 'https://github.com/huggingface/datasets/pull/4754', 'diff_url': 'https://github.com/huggingface/datasets/pull/4754.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4754.patch', 'merged_at': datetime.datetime(2022, 7, 27, 14, 51, 6)} | Following https://github.com/huggingface/datasets/pull/4753 there was still a "unknown" langauge tag in `wikipedia` so the job at https://github.com/huggingface/datasets/runs/7542567336?check_suite_focus=true failed for wikipedia | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4754/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4754/timeline | null | null | true |
209 | https://api.github.com/repos/huggingface/datasets/issues/4753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4753/comments | https://api.github.com/repos/huggingface/datasets/issues/4753/events | https://github.com/huggingface/datasets/pull/4753 | 1,319,571,745 | PR_kwDODunzps48Ll8G | 4,753 | Add `language_bcp47` tag | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-27 13:31:16 | 2022-07-27 14:50:03 | 2022-07-27 14:37:56 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4753', 'html_url': 'https://github.com/huggingface/datasets/pull/4753', 'diff_url': 'https://github.com/huggingface/datasets/pull/4753.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4753.patch', 'merged_at': datetime.datetime(2022, 7, 27, 14, 37, 56)} | Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for iso 639 1-2-3 codes. In particular I made sure that all the tags in `languages` are not longer than 3 characters. I moved the rest to `language_bcp47` and fixed some of them.
After this PR is merged I think we can simplify the language validation from the DatasetMetadata class (and keep it bare-bone just for the tagging app)
PS: the CI is failing because of missing content in dataset cards that are unrelated to this PR | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4753/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4753/timeline | null | null | true |
210 | https://api.github.com/repos/huggingface/datasets/issues/4752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4752/comments | https://api.github.com/repos/huggingface/datasets/issues/4752/events | https://github.com/huggingface/datasets/issues/4752 | 1,319,464,409 | I_kwDODunzps5OpW3Z | 4,752 | DatasetInfo issue when testing multiple configs: mixed task_templates | {'login': 'BramVanroy', 'id': 2779410, 'node_id': 'MDQ6VXNlcjI3Nzk0MTA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2779410?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/BramVanroy', 'html_url': 'https://github.com/BramVanroy', 'followers_url': 'https://api.github.com/users/BramVanroy/followers', 'following_url': 'https://api.github.com/users/BramVanroy/following{/other_user}', 'gists_url': 'https://api.github.com/users/BramVanroy/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/BramVanroy/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/BramVanroy/subscriptions', 'organizations_url': 'https://api.github.com/users/BramVanroy/orgs', 'repos_url': 'https://api.github.com/users/BramVanroy/repos', 'events_url': 'https://api.github.com/users/BramVanroy/events{/privacy}', 'received_events_url': 'https://api.github.com/users/BramVanroy/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ["I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache/modules/dataset_modules/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`."
'Ugh. Found the issue: apparently `datasets` was reusing the already existing `dataset_infos.json` that is inside `datasets/datasets/hebban-reviews`! Is this desired behavior?\r\n\r\nPerhaps when `--save_infos` and `--all_configs` are given, an existing `dataset_infos.json` file should first be deleted before continuing with the test? Because that would assume that the user wants to create a new infos file for all configs anyway.'
'Hi! I think this is a reasonable solution. Would you be interested in submitting a PR?'] | 2022-07-27 12:04:54 | 2022-08-08 18:20:50 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data from unfiltered.json.gz (I'd want this without splits, just one chunk of data, but that does not seem possible?)
- filtered_sentiment: `review_sentiment` as ClassLabel, TextClassification task with `review_sentiment` as label. Gets train/test split from respective json.gz files
- filtered_rating: `review_rating0` as ClassLabel, TextClassification task with `review_rating0` as label. Gets train/test split from respective json.gz files
This might be a bit tedious to reproduce, so I am sorry, but these are the steps:
- Clone datasets -> `datasets/` and install it
- Clone `https://huggingface.co/datasets/BramVanroy/hebban-reviews` into `datasets/datasets` so that you have a new folder `datasets/datasets/hebban-reviews/`.
- Replace the HebbanReviews class with this new one:
```python
class HebbanReviews(datasets.GeneratorBasedBuilder):
"""The Hebban book reviews dataset."""
BUILDER_CONFIGS = [
HebbanReviewsConfig(
name="unfiltered",
description=_HEBBAN_REVIEWS_UNFILTERED_DESCRIPTION,
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_sentiment",
description=f"This config has the negative, neutral, and positive sentiment scores as ClassLabel in the 'review_sentiment' column.\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_rating",
description=f"This config has the 5-class ratings as ClassLabel in the 'review_rating0' column (which is a variant of 'review_rating' that starts counting from 0 instead of 1).\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
)
]
DEFAULT_CONFIG_NAME = "filtered_sentiment"
_URLS = {
"train": "train.jsonl.gz",
"test": "test.jsonl.gz",
"unfiltered": "unfiltered.jsonl.gz",
}
def _info(self):
features = {
"review_title": datasets.Value("string"),
"review_text": datasets.Value("string"),
"review_text_without_quotes": datasets.Value("string"),
"review_n_quotes": datasets.Value("int32"),
"review_n_tokens": datasets.Value("int32"),
"review_rating": datasets.Value("int32"),
"review_rating0": datasets.Value("int32"),
"review_author_url": datasets.Value("string"),
"review_author_type": datasets.Value("string"),
"review_n_likes": datasets.Value("int32"),
"review_n_comments": datasets.Value("int32"),
"review_url": datasets.Value("string"),
"review_published_date": datasets.Value("string"),
"review_crawl_date": datasets.Value("string"),
"lid": datasets.Value("string"),
"lid_probability": datasets.Value("float32"),
"review_sentiment": datasets.features.ClassLabel(names=["negative", "neutral", "positive"]),
"review_sentiment_label": datasets.Value("string"),
"book_id": datasets.Value("int32"),
}
if self.config.name == "filtered_sentiment":
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_sentiment")]
elif self.config.name == "filtered_rating":
# For CrossEntropy, our classes need to start at index 0 -- not 1
features["review_rating0"] = datasets.features.ClassLabel(names=["1", "2", "3", "4", "5"])
features["review_sentiment"] = datasets.Value("int32")
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_rating0")]
elif self.config.name == "unfiltered": # no ClassLabels in unfiltered
features["review_sentiment"] = datasets.Value("int32")
task_templates = None
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
print("AT INFO", self.config.name, task_templates)
return datasets.DatasetInfo(
description=self.config.description,
features=datasets.Features(features),
homepage="https://huggingface.co/datasets/BramVanroy/hebban-reviews",
citation=_HEBBAN_REVIEWS_CITATION,
task_templates=task_templates,
license="cc-by-4.0"
)
def _split_generators(self, dl_manager):
if self.config.name.startswith("filtered"):
files = dl_manager.download_and_extract({"train": "train.jsonl.gz",
"test": "test.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"data_file": files["test"]
},
),
]
elif self.config.name == "unfiltered":
files = dl_manager.download_and_extract({"train": "unfiltered.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
]
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
def _generate_examples(self, data_file):
lines = Path(data_file).open(encoding="utf-8").readlines()
for line_idx, line in enumerate(lines):
row = json.loads(line)
yield line_idx, row
```
- finally, run `datasets-cli test ./datasets/hebban-reviews/ --save_infos --all_configs` from within the topmost `datasets` directory
## Expected results
Succeeding tests for three different configs.
## Actual results
I printed out the values that are given to `DatasetInfo` for config name and task_templates, as you can see. There, as expected, I get `unfiltered None`. I also modified datasets/info.py and added this line [at L.170](https://github.com/huggingface/datasets/blob/f5847a304aa1b38b3a3c54a8318b4df60f1299bc/src/datasets/info.py#L170):
```python
print("INTERNALLY AT INFO.PY", self.config_name, self.task_templates)
```
to my surprise, here I get `unfiltered [TextClassification(task='text-classification', text_column='review_text_without_quotes', label_column='review_sentiment')]`. So one way or another, here I suddenly see that `unfiltered` now does have a task_template -- even though that is not what is written in the data loading script, as the first print statement correctly shows.
I do not quite understand how, but it seems that the config name and task_templates get mixed.
This ultimately leads to the following error, but this trace may not be very useful in itself:
```
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\hebban-U6poXNQd\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "c:\dev\python\hebban\datasets\src\datasets\commands\datasets_cli.py", line 39, in main
service.run()
File "c:\dev\python\hebban\datasets\src\datasets\commands\test.py", line 144, in run
builder.as_dataset()
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 899, in as_dataset
datasets = map_nested(
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 393, in map_nested
mapped = [
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 930, in _build_single_dataset
ds = self._as_dataset(
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 1006, in _as_dataset
return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File "c:\dev\python\hebban\datasets\src\datasets\arrow_dataset.py", line 661, in __init__
info = info.copy() if info is not None else DatasetInfo()
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 286, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 176, in __post_init__
self.task_templates = [
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 177, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "c:\dev\python\hebban\datasets\src\datasets\tasks\text_classification.py", line 22, in align_with_features
raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
ValueError: Column review_sentiment is not a ClassLabel.
```
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4752/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4752/timeline | null | null | false |
211 | https://api.github.com/repos/huggingface/datasets/issues/4751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4751/comments | https://api.github.com/repos/huggingface/datasets/issues/4751/events | https://github.com/huggingface/datasets/pull/4751 | 1,319,440,903 | PR_kwDODunzps48LJ7U | 4,751 | Added dataset information in clinic oos dataset card | {'login': 'Arnav-Ladkat', 'id': 84362194, 'node_id': 'MDQ6VXNlcjg0MzYyMTk0', 'avatar_url': 'https://avatars.githubusercontent.com/u/84362194?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Arnav-Ladkat', 'html_url': 'https://github.com/Arnav-Ladkat', 'followers_url': 'https://api.github.com/users/Arnav-Ladkat/followers', 'following_url': 'https://api.github.com/users/Arnav-Ladkat/following{/other_user}', 'gists_url': 'https://api.github.com/users/Arnav-Ladkat/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Arnav-Ladkat/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Arnav-Ladkat/subscriptions', 'organizations_url': 'https://api.github.com/users/Arnav-Ladkat/orgs', 'repos_url': 'https://api.github.com/users/Arnav-Ladkat/repos', 'events_url': 'https://api.github.com/users/Arnav-Ladkat/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Arnav-Ladkat/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-27 11:44:28 | 2022-07-28 10:53:21 | 2022-07-28 10:40:37 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4751', 'html_url': 'https://github.com/huggingface/datasets/pull/4751', 'diff_url': 'https://github.com/huggingface/datasets/pull/4751.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4751.patch', 'merged_at': datetime.datetime(2022, 7, 28, 10, 40, 37)} | This PR aims to add relevant information like the Description, Language and citation information of the clinic oos dataset card. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4751/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4751/timeline | null | null | true |
212 | https://api.github.com/repos/huggingface/datasets/issues/4750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4750/comments | https://api.github.com/repos/huggingface/datasets/issues/4750/events | https://github.com/huggingface/datasets/issues/4750 | 1,319,333,645 | I_kwDODunzps5Oo28N | 4,750 | Easily create loading script for benchmark comprising multiple huggingface datasets | {'login': 'JoelNiklaus', 'id': 3775944, 'node_id': 'MDQ6VXNlcjM3NzU5NDQ=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3775944?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/JoelNiklaus', 'html_url': 'https://github.com/JoelNiklaus', 'followers_url': 'https://api.github.com/users/JoelNiklaus/followers', 'following_url': 'https://api.github.com/users/JoelNiklaus/following{/other_user}', 'gists_url': 'https://api.github.com/users/JoelNiklaus/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JoelNiklaus/subscriptions', 'organizations_url': 'https://api.github.com/users/JoelNiklaus/orgs', 'repos_url': 'https://api.github.com/users/JoelNiklaus/repos', 'events_url': 'https://api.github.com/users/JoelNiklaus/events{/privacy}', 'received_events_url': 'https://api.github.com/users/JoelNiklaus/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['Hi ! I think the simplest is to copy paste the `_split_generators` code from the other datasets and do a bunch of if-else, as in the glue dataset: https://huggingface.co/datasets/glue/blob/main/glue.py#L467'
'Ok, I see. Thank you'] | 2022-07-27 10:13:38 | 2022-07-27 13:58:07 | 2022-07-27 13:58:07 | CONTRIBUTOR | null | null | null | Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function `_split_generators` needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a single interface to all the underlying datasets.
I thought about downloading the files with the load_dataset function and then providing the link to the cached file. But this seems a bit inelegant to me. What approach would you propose to do this?
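For reference, a rough sketch of the glue-style dispatch approach mentioned in the comments could look like the following (the config names and URLs below are purely hypothetical placeholders, not actual benchmark sources):

```python
import datasets

# Hypothetical sources for the sub-datasets of the benchmark
_URLS = {
    "sub_dataset_a": "https://example.com/sub_dataset_a.zip",
    "sub_dataset_b": "https://example.com/sub_dataset_b.zip",
}

class MyBenchmark(datasets.GeneratorBasedBuilder):
    # One config per underlying dataset
    BUILDER_CONFIGS = [datasets.BuilderConfig(name=name) for name in _URLS]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Dispatch on the selected config (if/else or a dict lookup), glue-style
        data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir}
            )
        ]

    def _generate_examples(self, data_dir):
        # Parse the files of the selected sub-dataset and yield (key, example) pairs
        ...
```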
Please let me know if you have any questions.
Cheers,
Joel | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4750/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4750/timeline | null | completed | false |
213 | https://api.github.com/repos/huggingface/datasets/issues/4748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4748/comments | https://api.github.com/repos/huggingface/datasets/issues/4748/events | https://github.com/huggingface/datasets/pull/4748 | 1,318,874,913 | PR_kwDODunzps48JTEb | 4,748 | Add image classification processing guide | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-27 00:11:11 | 2022-07-27 17:28:21 | 2022-07-27 17:16:12 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4748', 'html_url': 'https://github.com/huggingface/datasets/pull/4748', 'diff_url': 'https://github.com/huggingface/datasets/pull/4748.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4748.patch', 'merged_at': datetime.datetime(2022, 7, 27, 17, 16, 12)} | This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4748/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4748/timeline | null | null | true |
214 | https://api.github.com/repos/huggingface/datasets/issues/4747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4747/comments | https://api.github.com/repos/huggingface/datasets/issues/4747/events | https://github.com/huggingface/datasets/pull/4747 | 1,318,586,932 | PR_kwDODunzps48IWKj | 4,747 | Shard parquet in `download_and_prepare` | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4747). All of your documentation changes will be reflected on that endpoint.'] | 2022-07-26 18:05:01 | 2022-07-29 14:19:31 | null | MEMBER | null | true | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4747', 'html_url': 'https://github.com/huggingface/datasets/pull/4747', 'diff_url': 'https://github.com/huggingface/datasets/pull/4747.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4747.patch', 'merged_at': None} | Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (default to 500MB for parquet, and None for arrow).
```python
from datasets import *
cache_dir = "s3://..."
builder = load_dataset_builder("squad", cache_dir=cache_dir)
builder.download_and_prepare(file_format="parquet", max_shard_size="5MB")
```
### Implementation details
The examples are written to a parquet file until `ParquetWriter._num_bytes > max_shard_size`. When this happens, a new writer is instantiated to start writing the next shard. At the end, all the shards are renamed to include the total number of shards in their names: `{builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet`
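For illustration, the rotation logic described above could be sketched as follows (a simplification, not the actual writer code; `new_parquet_writer` and the file naming are hypothetical stand-ins):

```python
import os

def write_sharded(examples, max_shard_size, new_parquet_writer, prefix="dataset-train"):
    # Write examples, rotating to a new shard once the current writer exceeds max_shard_size
    shard_id = 0
    writer = new_parquet_writer(shard_id)
    for example in examples:
        writer.write(example)
        if writer._num_bytes > max_shard_size:
            writer.close()
            shard_id += 1
            writer = new_parquet_writer(shard_id)
    writer.close()
    # Rename the shards at the end so each name carries the total number of shards
    num_shards = shard_id + 1
    for i in range(num_shards):
        os.rename(
            f"{prefix}-{i:05d}.parquet",
            f"{prefix}-{i:05d}-of-{num_shards:05d}.parquet",
        )
```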
TODO:
- [x] docstrings
- [x] docs
- [x] tests
cc @severo | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4747/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4747/timeline | null | null | true |
215 | https://api.github.com/repos/huggingface/datasets/issues/4746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4746/comments | https://api.github.com/repos/huggingface/datasets/issues/4746/events | https://github.com/huggingface/datasets/issues/4746 | 1,318,486,599 | I_kwDODunzps5OloJH | 4,746 | Dataset Viewer issue for yanekyuk/wikikey | {'login': 'ai-ashok', 'id': 91247690, 'node_id': 'MDQ6VXNlcjkxMjQ3Njkw', 'avatar_url': 'https://avatars.githubusercontent.com/u/91247690?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ai-ashok', 'html_url': 'https://github.com/ai-ashok', 'followers_url': 'https://api.github.com/users/ai-ashok/followers', 'following_url': 'https://api.github.com/users/ai-ashok/following{/other_user}', 'gists_url': 'https://api.github.com/users/ai-ashok/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ai-ashok/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ai-ashok/subscriptions', 'organizations_url': 'https://api.github.com/users/ai-ashok/orgs', 'repos_url': 'https://api.github.com/users/ai-ashok/repos', 'events_url': 'https://api.github.com/users/ai-ashok/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ai-ashok/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | open | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False}] | 
null | ['The dataset is empty, as far as I can tell: there are no files in the repository at https://huggingface.co/datasets/yanekyuk/wikikey/tree/main\r\n\r\nMaybe the viewer can display a better message for empty datasets'] | 2022-07-26 16:25:16 | 2022-07-26 18:07:37 | null | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4746/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4746/timeline | null | null | false |
216 | https://api.github.com/repos/huggingface/datasets/issues/4745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4745/comments | https://api.github.com/repos/huggingface/datasets/issues/4745/events | https://github.com/huggingface/datasets/issues/4745 | 1,318,016,655 | I_kwDODunzps5Oj1aP | 4,745 | Allow `list_datasets` to include private datasets | {'login': 'ola13', 'id': 1528523, 'node_id': 'MDQ6VXNlcjE1Mjg1MjM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1528523?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ola13', 'html_url': 'https://github.com/ola13', 'followers_url': 'https://api.github.com/users/ola13/followers', 'following_url': 'https://api.github.com/users/ola13/following{/other_user}', 'gists_url': 'https://api.github.com/users/ola13/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ola13/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ola13/subscriptions', 'organizations_url': 'https://api.github.com/users/ola13/orgs', 'repos_url': 'https://api.github.com/users/ola13/repos', 'events_url': 'https://api.github.com/users/ola13/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ola13/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | false | null | [] | null | ["Thanks for opening this issue :)\r\n\r\nIf it can help, I think you can already use `huggingface_hub` to achieve this:\r\n```python\r\n>>> from huggingface_hub import HfApi\r\n>>> [ds_info.id for ds_info in HfApi().list_datasets(use_auth_token=token) if ds_info.private]\r\n['bigscience/xxxx', 'bigscience-catalogue-data/xxxxxxx', ... ]\r\n```\r\n\r\n---------\r\n\r\nThough the latest versions of `huggingface_hub` that contain this feature are not available on python 3.6, so maybe we should first drop support for python 3.6 (see #4460) to update `list_datasets` in `datasets` as well (or we would have to copy/paste some `huggingface_hub` code)"
'Great, thanks @lhoestq the workaround works! I think it would be intuitive to have the support directly in `datasets` but it makes sense to wait given that the workaround exists :)'
"i also think that going forward we should replace more and more implementations inside datasets with the corresponding ones from `huggingface_hub` (same as we're doing in `transformers`)"] | 2022-07-26 10:16:08 | 2022-07-26 11:59:25 | null | NONE | null | null | null | I am working with a large collection of private datasets, it would be convenient for me to be able to list them.
I would envision extending the convention of using the `use_auth_token` keyword argument to the `list_datasets` function, so that calling:
```
list_datasets(use_auth_token="my_token")
```
would return the list of all datasets I have permissions to view, including private ones. The only current alternative I see is to use the hub website to manually obtain the list of dataset names - this is in the context of BigScience where respective private spaces contain hundreds of datasets, so not very convenient to list manually. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4745/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4745/timeline | null | null | false |
217 | https://api.github.com/repos/huggingface/datasets/issues/4744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4744/comments | https://api.github.com/repos/huggingface/datasets/issues/4744/events | https://github.com/huggingface/datasets/issues/4744 | 1,317,822,345 | I_kwDODunzps5OjF-J | 4,744 | Remove instructions to generate dummy data from our docs | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | closed | false | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 
'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False}] | null | ['Note that for me personally, conceptually all the dummy data (even for "canonical" datasets) should be superseded by `datasets-server`, which performs some kind of CI/CD of datasets (including the canonical ones)'
'I totally agree: next step should be rethinking if dummy data makes sense for canonical datasets (once we have datasets-server) and eventually remove it.\r\n\r\nBut for now, we could at least start by removing the indication to generate dummy data from our docs.'] | 2022-07-26 07:32:58 | 2022-08-02 23:50:30 | 2022-08-02 23:50:30 | MEMBER | null | null | null | In our docs, we indicate to generate the dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI test requiring dummy data
- there are no instructions on how they can test their dataset locally using the dummy data
- the generation of the dummy data assumes our GitHub directory structure:
- the dummy data will be generated under `./datasets/<dataset_name>/dummy` even if locally there is no `./datasets` directory (which is the usual case). See issue:
- #4742
CC: @stevhliu | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4744/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4744/timeline | null | completed | false |
218 | https://api.github.com/repos/huggingface/datasets/issues/4743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4743/comments | https://api.github.com/repos/huggingface/datasets/issues/4743/events | https://github.com/huggingface/datasets/pull/4743 | 1,317,362,561 | PR_kwDODunzps48EUFs | 4,743 | Update map docs | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-25 20:59:35 | 2022-07-27 16:22:04 | 2022-07-27 16:10:04 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4743', 'html_url': 'https://github.com/huggingface/datasets/pull/4743', 'diff_url': 'https://github.com/huggingface/datasets/pull/4743.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4743.patch', 'merged_at': datetime.datetime(2022, 7, 27, 16, 10, 4)} | This PR updates the `map` docs for processing text to include `return_tensors="np"` to make it run faster (see #4676). | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4743/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4743/timeline | null | null | true |
219 | https://api.github.com/repos/huggingface/datasets/issues/4742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4742/comments | https://api.github.com/repos/huggingface/datasets/issues/4742/events | https://github.com/huggingface/datasets/issues/4742 | 1,317,260,663 | I_kwDODunzps5Og813 | 4,742 | Dummy data nowhere to be found | {'login': 'BramVanroy', 'id': 2779410, 'node_id': 'MDQ6VXNlcjI3Nzk0MTA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2779410?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/BramVanroy', 'html_url': 'https://github.com/BramVanroy', 'followers_url': 'https://api.github.com/users/BramVanroy/followers', 'following_url': 'https://api.github.com/users/BramVanroy/following{/other_user}', 'gists_url': 'https://api.github.com/users/BramVanroy/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/BramVanroy/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/BramVanroy/subscriptions', 'organizations_url': 'https://api.github.com/users/BramVanroy/orgs', 'repos_url': 'https://api.github.com/users/BramVanroy/repos', 'events_url': 'https://api.github.com/users/BramVanroy/events{/privacy}', 'received_events_url': 'https://api.github.com/users/BramVanroy/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ['Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst of all, please note that you do not need the dummy data: this was the case when we were adding datasets to the `datasets` library (on this GitHub repo), so that we could test the correct loading of all datasets with our CI. However, this is no longer the case for datasets on the Hub.\r\n- We should definitely update our docs.\r\n\r\nSecond, the dummy data is generated locally:\r\n- in your case, the dummy data will be generated inside the directory: `./datasets/hebban-reviews/dummy`\r\n- please note the preceding `./datasets` directory: the reason for this is that the command to generate the dummy data was specifically created for our `datasets` library, and therefore assumes our directory structure: commands are run from the root directory of our GitHub repo, and datasets scripts are under `./datasets` \r\n\r\n\r\n '
'I have opened an Issue to update the instructions on dummy data generation:\r\n- #4744'] | 2022-07-25 19:18:42 | 2022-07-26 07:33:47 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
To finalize my dataset, I wanted to create dummy data as per the guide and I ran
```shell
datasets-cli dummy_data datasets/hebban-reviews --auto_generate
```
where hebban-reviews is [this repo](https://huggingface.co/datasets/BramVanroy/hebban-reviews). And even though the script runs and shows a message at the end that it succeeded, I cannot find the dummy data anywhere. Where is it?
## Expected results
To see the dummy data in the datasets' folder or in the folder where I ran the command.
## Actual results
I see the following message but I cannot find the dummy data anywhere.
```
Dummy data generation done and dummy data test succeeded for config 'filtered''.
Automatic dummy data generation succeeded for all configs of '.\datasets\hebban-reviews\'
```
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4742/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4742/timeline | null | null | false |
220 | https://api.github.com/repos/huggingface/datasets/issues/4741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4741/comments | https://api.github.com/repos/huggingface/datasets/issues/4741/events | https://github.com/huggingface/datasets/pull/4741 | 1,316,621,272 | PR_kwDODunzps48B2fl | 4,741 | Fix to dict conversion of `DatasetInfo`/`Features` | {'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-25 10:41:27 | 2022-07-25 12:50:36 | 2022-07-25 12:37:53 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4741', 'html_url': 'https://github.com/huggingface/datasets/pull/4741', 'diff_url': 'https://github.com/huggingface/datasets/pull/4741.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4741.patch', 'merged_at': datetime.datetime(2022, 7, 25, 12, 37, 53)} | Fix #4681 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4741/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4741/timeline | null | null | true |
221 | https://api.github.com/repos/huggingface/datasets/issues/4740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4740/comments | https://api.github.com/repos/huggingface/datasets/issues/4740/events | https://github.com/huggingface/datasets/pull/4740 | 1,316,478,007 | PR_kwDODunzps48BX5l | 4,740 | Fix multiprocessing in map_nested | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
'@lhoestq as a workaround to preserve previous behavior, the parameter `multiprocessing_min_length=16` is passed from `download` to `map_nested`, so that multiprocessing is only used if at least 16 files to be downloaded.\r\n\r\nNote that there is a small breaking change (I think previously it was unintended behavior, so that I have fixed it):\r\n- Before (with default `num_proc=16`) if there were 16 files to be downloaded, multiprocessing was not used\r\n- Now (with default `num_proc=16`) if there are 16 files to be downloaded, multiprocessing is used'
'Thanks for the workaround !'] | 2022-07-25 08:44:19 | 2022-07-28 10:53:23 | 2022-07-28 10:40:31 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4740', 'html_url': 'https://github.com/huggingface/datasets/pull/4740', 'diff_url': 'https://github.com/huggingface/datasets/pull/4740.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4740.patch', 'merged_at': datetime.datetime(2022, 7, 28, 10, 40, 31)} | As previously discussed:
Before, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.
- Multiprocessing was not used e.g. when passing `num_proc=20` but having 19 files to download
- As by default, `DownloadManager` sets `num_proc=16`, before multiprocessing was only used when `len(iterable)>16` by default
Now, if `num_proc` is greater than or equal to `len(iterable)`, `num_proc` is set to `len(iterable)` and multiprocessing is used.
- We pass the variable `parallel_min_length=16`, so that multiprocessing is only used if there are at least 16 files to be downloaded (see the sketch below)
- ~As by default, `DownloadManager` sets `num_proc=16`, now multiprocessing is used when `len(iterable)>1` by default~
See discussion below.
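A minimal sketch of the resulting dispatch logic (a simplified illustration, not the actual `map_nested` implementation):

```python
from multiprocessing import Pool

def map_nested_sketch(function, iterable, num_proc=1, parallel_min_length=16):
    # Parallelize only when there are enough items and more than one process is requested
    if num_proc > 1 and len(iterable) >= parallel_min_length:
        num_proc = min(num_proc, len(iterable))  # cap workers at the number of items
        with Pool(num_proc) as pool:
            return pool.map(function, iterable)
    # Otherwise fall back to sequential mapping
    return [function(x) for x in iterable]
```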
~After having had to fix some tests (87602ac), I am wondering:~
- ~do we want to have multiprocessing by default?~
- ~please note that `DownloadManager.download` sets `num_proc=16` by default~
- ~or would it be better to ask the user to set it explicitly if they want multiprocessing (and default to `num_proc=1`)?~
Fix #4636.
CC: @nateraw | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4740/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4740/timeline | null | null | true |
222 | https://api.github.com/repos/huggingface/datasets/issues/4739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4739/comments | https://api.github.com/repos/huggingface/datasets/issues/4739/events | https://github.com/huggingface/datasets/pull/4739 | 1,316,400,915 | PR_kwDODunzps48BHdE | 4,739 | Deprecate metrics | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
'I mark this as Draft because the deprecated version number needs being updated after the latest release.'
'Perhaps now is the time to also update the `inspect_metric` from `evaluate` with the changes introduced in https://github.com/huggingface/datasets/pull/4433 (cc @lvwerra) '
'What do you think of including what changes users have to do to switch to `evaluate` in the warning message ?\r\n(basically replace `datasets.load_metric` by `evaluate.load`)\r\n\r\nI think it can help users migrate to `evaluate` and silence the warnings'] | 2022-07-25 07:35:55 | 2022-07-28 11:44:27 | 2022-07-28 11:32:16 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4739', 'html_url': 'https://github.com/huggingface/datasets/pull/4739', 'diff_url': 'https://github.com/huggingface/datasets/pull/4739.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4739.patch', 'merged_at': datetime.datetime(2022, 7, 28, 11, 32, 16)} | Deprecate metrics:
- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning
- test deprecation warnings are issues
- deprecate metrics in all docs
- remove mentions to metrics in docs and README
- deprecate internal functions/classes
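For users, the migration suggested in the comments would look roughly like this (a sketch; it assumes the separate `evaluate` package is installed):

```python
# Deprecated usage in datasets:
# from datasets import load_metric
# metric = load_metric("rouge")

# Replacement using the evaluate library:
import evaluate

metric = evaluate.load("rouge")
```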
Maybe we should also stop testing metrics? | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4739/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4739/timeline | null | null | true |
223 | https://api.github.com/repos/huggingface/datasets/issues/4738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4738/comments | https://api.github.com/repos/huggingface/datasets/issues/4738/events | https://github.com/huggingface/datasets/pull/4738 | 1,315,222,166 | PR_kwDODunzps479hq4 | 4,738 | Use CI unit/integration tests | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
'I think this PR can be merged. Willing to see it in action.\r\n\r\nCC: @lhoestq '] | 2022-07-22 16:48:00 | 2022-07-26 20:19:22 | 2022-07-26 20:07:05 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4738', 'html_url': 'https://github.com/huggingface/datasets/pull/4738', 'diff_url': 'https://github.com/huggingface/datasets/pull/4738.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4738.patch', 'merged_at': datetime.datetime(2022, 7, 26, 20, 7, 5)} | This PR:
- Implements separate unit/integration tests
- A fail in integration tests does not cancel the rest of the jobs
- We should implement more robust integration tests: work in progress in a subsequent PR
- For the moment, tests involving network requests are marked as integration: to be evolved | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4738/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4738/timeline | null | null | true |
224 | https://api.github.com/repos/huggingface/datasets/issues/4737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4737/comments | https://api.github.com/repos/huggingface/datasets/issues/4737/events | https://github.com/huggingface/datasets/issues/4737 | 1,315,011,004 | I_kwDODunzps5OYXm8 | 4,737 | Download error on scene_parse_150 | {'login': 'juliensimon', 'id': 3436143, 'node_id': 'MDQ6VXNlcjM0MzYxNDM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3436143?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/juliensimon', 'html_url': 'https://github.com/juliensimon', 'followers_url': 'https://api.github.com/users/juliensimon/followers', 'following_url': 'https://api.github.com/users/juliensimon/following{/other_user}', 'gists_url': 'https://api.github.com/users/juliensimon/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/juliensimon/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/juliensimon/subscriptions', 'organizations_url': 'https://api.github.com/users/juliensimon/orgs', 'repos_url': 'https://api.github.com/users/juliensimon/repos', 'events_url': 'https://api.github.com/users/juliensimon/events{/privacy}', 'received_events_url': 'https://api.github.com/users/juliensimon/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ["Hi! The server with the data seems to be down. I've reported this issue (https://github.com/CSAILVision/sceneparsing/issues/34) in the dataset repo. "] | 2022-07-22 13:28:28 | 2022-07-22 14:29:11 | null | NONE | null | null | null | ```
from datasets import load_dataset
dataset = load_dataset("scene_parse_150", "scene_parsing")
FileNotFoundError: Couldn't find file at http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
```
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4737/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4737/timeline | null | null | false |
225 | https://api.github.com/repos/huggingface/datasets/issues/4736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4736/comments | https://api.github.com/repos/huggingface/datasets/issues/4736/events | https://github.com/huggingface/datasets/issues/4736 | 1,314,931,996 | I_kwDODunzps5OYEUc | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | {'login': 'dk-crazydiv', 'id': 47515542, 'node_id': 'MDQ6VXNlcjQ3NTE1NTQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47515542?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/dk-crazydiv', 'html_url': 'https://github.com/dk-crazydiv', 'followers_url': 'https://api.github.com/users/dk-crazydiv/followers', 'following_url': 'https://api.github.com/users/dk-crazydiv/following{/other_user}', 'gists_url': 'https://api.github.com/users/dk-crazydiv/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/dk-crazydiv/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/dk-crazydiv/subscriptions', 'organizations_url': 'https://api.github.com/users/dk-crazydiv/orgs', 'repos_url': 'https://api.github.com/users/dk-crazydiv/repos', 'events_url': 'https://api.github.com/users/dk-crazydiv/events{/privacy}', 'received_events_url': 'https://api.github.com/users/dk-crazydiv/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 
'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False}] | null | ["Thanks for reporting. You're right, workers were under-provisioned due to a manual error, and the job queue was full. It's fixed now."] | 2022-07-22 12:14:18 | 2022-07-22 13:46:38 | 2022-07-22 13:46:38 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on an uploaded dataset. I've been getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is csv, so I'm not sure if it's supposed to take this much time or not.
```
Status code: 400
Exception: Status400Error
Message: The split is being processed. Retry later.
```
Is there any explicit step to be taken to get the viewer to work?
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4736/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4736/timeline | null | completed | false |
226 | https://api.github.com/repos/huggingface/datasets/issues/4735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4735/comments | https://api.github.com/repos/huggingface/datasets/issues/4735/events | https://github.com/huggingface/datasets/pull/4735 | 1,314,501,641 | PR_kwDODunzps477CuP | 4,735 | Pin rouge_score test dependency | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-22 07:18:21 | 2022-07-22 07:58:14 | 2022-07-22 07:45:18 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4735', 'html_url': 'https://github.com/huggingface/datasets/pull/4735', 'diff_url': 'https://github.com/huggingface/datasets/pull/4735.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4735.patch', 'merged_at': datetime.datetime(2022, 7, 22, 7, 45, 18)} | Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed.
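As an illustration only (the exact constraint used in the PR is an assumption), such a temporary pin of the test dependency could look like:

```python
# Sketch of a pinned test requirement; the broken release is 0.0.7 per #4734
TESTS_REQUIRE = [
    "rouge_score<0.0.7",
]
```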
Fix #4734 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4735/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4735/timeline | null | null | true |
227 | https://api.github.com/repos/huggingface/datasets/issues/4734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4734/comments | https://api.github.com/repos/huggingface/datasets/issues/4734/events | https://github.com/huggingface/datasets/issues/4734 | 1,314,495,382 | I_kwDODunzps5OWZuW | 4,734 | Package rouge-score cannot be imported | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 
'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['We have added a comment on an existing issue opened in their repo: https://github.com/google-research/google-research/issues/1212#issuecomment-1192267130\r\n- https://github.com/google-research/google-research/issues/1212'] | 2022-07-22 07:15:05 | 2022-07-22 07:45:19 | 2022-07-22 07:45:18 | MEMBER | null | null | null | ## Describe the bug
After today's release of `rouge_score-0.0.7`, the package can no longer be imported. Our CI fails: https://github.com/huggingface/datasets/runs/7463218591?check_suite_focus=true
```
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_configs_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_bigbench
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_rouge
```
with errors:
```
> from rouge_score import rouge_scorer
E ModuleNotFoundError: No module named 'rouge_score'
```
```
E ImportError: To be able to use rouge, you need to install the following dependency: rouge_score.
E Please install it using 'pip install rouge_score' for instance'
```
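A quick environment check (an editor's sketch; the pinned version below follows the downgrade reported to work in #4733):

```python
# Sketch: verify whether rouge_score is importable in the current environment.
# If it is not, pinning the previous release (0.0.4, per issue #4733) is a
# reported workaround until the upstream packaging issue is fixed.
try:
    from rouge_score import rouge_scorer, scoring  # noqa: F401
    print("rouge_score imports fine")
except ImportError as err:
    print(f"rouge_score is not importable: {err}")
    print("Workaround: pip install 'rouge_score==0.0.4'")
```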
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4734/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4734/timeline | null | completed | false |
228 | https://api.github.com/repos/huggingface/datasets/issues/4733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4733/comments | https://api.github.com/repos/huggingface/datasets/issues/4733/events | https://github.com/huggingface/datasets/issues/4733 | 1,314,479,616 | I_kwDODunzps5OWV4A | 4,733 | rouge metric | {'login': 'asking28', 'id': 29248466, 'node_id': 'MDQ6VXNlcjI5MjQ4NDY2', 'avatar_url': 'https://avatars.githubusercontent.com/u/29248466?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/asking28', 'html_url': 'https://github.com/asking28', 'followers_url': 'https://api.github.com/users/asking28/followers', 'following_url': 'https://api.github.com/users/asking28/following{/other_user}', 'gists_url': 'https://api.github.com/users/asking28/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/asking28/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/asking28/subscriptions', 'organizations_url': 'https://api.github.com/users/asking28/orgs', 'repos_url': 'https://api.github.com/users/asking28/repos', 'events_url': 'https://api.github.com/users/asking28/events{/privacy}', 'received_events_url': 'https://api.github.com/users/asking28/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 
'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['Fixed by:\r\n- #4735'] | 2022-07-22 07:06:51 | 2022-07-22 09:08:02 | 2022-07-22 09:05:35 | NONE | null | null | null | ## Describe the bug
Loading the Rouge metric gives an error after the latest rouge-score==0.0.7 release.
Downgrading to rouge-score==0.0.4 works fine.
## Steps to reproduce the bug
```python
# Minimal reproduction inferred from the traceback below: loading the rouge
# metric triggers the failing `from rouge_score import rouge_scorer, scoring`.
from datasets import load_metric

rouge = load_metric("rouge")
```
## Expected results
`from rouge_score import rouge_scorer, scoring` should run without raising an ImportError.
## Actual results
File "/root/.cache/huggingface/modules/datasets_modules/metrics/rouge/0ffdb60f436bdb8884d5e4d608d53dbe108e82dac4f494a66f80ef3f647c104f/rouge.py", line 21, in <module>
from rouge_score import rouge_scorer, scoring
ImportError: cannot import name 'rouge_scorer' from 'rouge_score' (unknown location)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: 3.9
- PyArrow version:
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4733/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4733/timeline | null | completed | false |
229 | https://api.github.com/repos/huggingface/datasets/issues/4732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4732/comments | https://api.github.com/repos/huggingface/datasets/issues/4732/events | https://github.com/huggingface/datasets/issues/4732 | 1,314,371,566 | I_kwDODunzps5OV7fu | 4,732 | Document better that loading a dataset passing its name does not use the local script | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | open | false | null | [] | null | ['Thanks for the feedback!\r\n\r\nI think since this issue is closely related to loading, I can add a clearer explanation under [Load > local loading script](https://huggingface.co/docs/datasets/main/en/loading#local-loading-script).'
'That makes sense but I think having a line about it under https://huggingface.co/docs/datasets/installation#source the "source" header here would be useful. My mental model of `pip install -e .` does not include the fact that the source files aren\'t actually being used. '
"Thanks for sharing your perspective. I think the `load_dataset` function is the only one that pulls from GitHub, and since this use-case is very specific, I don't think we need to include such a broad clarification in the Installation section.\r\n\r\nFeel free to check out the linked PR and let me know if it needs any additional explanation 😊"] | 2022-07-22 06:07:31 | 2022-08-01 20:32:13 | null | MEMBER | null | null | null | As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be more clear that loading a dataset by passing its name does not use the (modified) local script of it.
What he did:
- he installed `datasets` from source
- he modified the local `datasets/the_pile/the_pile.py` loading script
- he then tried to load the dataset, but used `load_dataset("the_pile")` instead of `load_dataset("datasets/the_pile")`
- as explained here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191040245:
- the former does not use the local script, but instead it downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~/.cache/huggingface/modules`) and uses that.
He suggests adding a clearer explanation about this, perhaps in [Installation > source](https://huggingface.co/docs/datasets/installation).
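A minimal sketch of the distinction described above (the calls mirror the ones from the linked issue; note that both will actually download the dataset's data):

```python
from datasets import load_dataset

# Uses the copy of the_pile.py downloaded from GitHub and cached under
# ~/.cache/huggingface/modules -- local edits to datasets/the_pile/the_pile.py
# are ignored by this call.
ds = load_dataset("the_pile")

# Uses the local (possibly modified) loading script instead, by passing its
# path relative to a local checkout of the repository.
ds = load_dataset("datasets/the_pile")
```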
CC: @stevhliu | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4732/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4732/timeline | null | null | false |
230 | https://api.github.com/repos/huggingface/datasets/issues/4731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4731/comments | https://api.github.com/repos/huggingface/datasets/issues/4731/events | https://github.com/huggingface/datasets/pull/4731 | 1,313,773,348 | PR_kwDODunzps474dlZ | 4,731 | docs: ✏️ fix TranslationVariableLanguages example | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-21 20:35:41 | 2022-07-22 07:01:00 | 2022-07-22 06:48:42 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4731', 'html_url': 'https://github.com/huggingface/datasets/pull/4731', 'diff_url': 'https://github.com/huggingface/datasets/pull/4731.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4731.patch', 'merged_at': datetime.datetime(2022, 7, 22, 6, 48, 42)} | null | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4731/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4731/timeline | null | null | true |
231 | https://api.github.com/repos/huggingface/datasets/issues/4730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4730/comments | https://api.github.com/repos/huggingface/datasets/issues/4730/events | https://github.com/huggingface/datasets/issues/4730 | 1,313,421,263 | I_kwDODunzps5OSTfP | 4,730 | Loading imagenet-1k validation split takes much more RAM than expected | {'login': 'fxmarty', 'id': 9808326, 'node_id': 'MDQ6VXNlcjk4MDgzMjY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/9808326?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/fxmarty', 'html_url': 'https://github.com/fxmarty', 'followers_url': 'https://api.github.com/users/fxmarty/followers', 'following_url': 'https://api.github.com/users/fxmarty/following{/other_user}', 'gists_url': 'https://api.github.com/users/fxmarty/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/fxmarty/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/fxmarty/subscriptions', 'organizations_url': 'https://api.github.com/users/fxmarty/orgs', 'repos_url': 'https://api.github.com/users/fxmarty/repos', 'events_url': 'https://api.github.com/users/fxmarty/events{/privacy}', 'received_events_url': 'https://api.github.com/users/fxmarty/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['My bad, `482 * 418 * 50000 * 3 / 1000000 = 30221 MB` ( https://stackoverflow.com/a/42979315 ).\r\n\r\nMeanwhile `256 * 256 * 50000 * 3 / 1000000 = 9830 MB`. We are loading the non-cropped images and that is why we take so much RAM.'] | 2022-07-21 15:14:06 | 2022-07-21 16:41:04 | 2022-07-21 16:41:04 | CONTRIBUTOR | null | null | null | ## Describe the bug
Loading the validation split of imagenet-1k into memory takes much more RAM than expected. Assuming ImageNet-1k is 150 GB in total, with 50,000 validation images and 1,281,167 training images, I would expect only about 6 GB to be loaded in RAM.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation")
print(dataset)
"""prints
Dataset({
features: ['image', 'label'],
num_rows: 50000
})
"""
pipe_inputs = dataset["image"]
# and wait :-)
```
## Expected results
Use only < 10 GB RAM when loading the images.
## Actual results
![image](https://user-images.githubusercontent.com/9808326/180249183-62f75ca4-d127-402a-9330-f12825a22b0a.png)
```
Using custom data configuration default
Reusing dataset imagenet-1k (/home/fxmarty/.cache/huggingface/datasets/imagenet-1k/default/1.0.0/a1e9bfc56c3a7350165007d1176b15e9128fcaf9ab972147840529aed3ae52bc)
Killed
```
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- datasets commit: 4e4222f1b6362c2788aec0dd2cd8cede6dd17b80
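As an aside (editor's sketch, not part of the original report), iterating over the split instead of materializing `dataset["image"]` keeps image decoding lazy, so memory stays bounded by one example at a time:

```python
from datasets import load_dataset

dataset = load_dataset("imagenet-1k", split="validation")

# Iterating decodes one image at a time instead of building the whole
# `image` column in memory.
for example in dataset:
    image = example["image"]
    # ... run the pipeline on `image` here ...
```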
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4730/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4730/timeline | null | completed | false |
232 | https://api.github.com/repos/huggingface/datasets/issues/4729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4729/comments | https://api.github.com/repos/huggingface/datasets/issues/4729/events | https://github.com/huggingface/datasets/pull/4729 | 1,313,374,015 | PR_kwDODunzps473GmR | 4,729 | Refactor Hub tests | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-21 14:43:13 | 2022-07-22 15:09:49 | 2022-07-22 14:56:29 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4729', 'html_url': 'https://github.com/huggingface/datasets/pull/4729', 'diff_url': 'https://github.com/huggingface/datasets/pull/4729.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4729.patch', 'merged_at': datetime.datetime(2022, 7, 22, 14, 56, 29)} | This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally` (see the sketch after this list)
- `cleanup_repo`: to delete repo accidentally created if one of the tests fails
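A hypothetical illustration of the `temporary_repo` pattern named above (the real fixture is defined in this PR; the signature and the `HfApi().delete_repo` cleanup shown here are assumptions for illustration only):

```python
from contextlib import contextmanager

import pytest
from huggingface_hub import HfApi


@pytest.fixture
def temporary_repo():
    @contextmanager
    def _temporary_repo(repo_id: str):
        try:
            yield repo_id
        finally:
            # Delete the repo even if the test body raised, replacing the
            # try/finally blocks previously scattered across the tests.
            # Assumes the test created a dataset repo under this id.
            HfApi().delete_repo(repo_id=repo_id, repo_type="dataset")

    return _temporary_repo
```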
This is a preliminary work done to manage unit/integration tests separately. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4729/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4729/timeline | null | null | true |
233 | https://api.github.com/repos/huggingface/datasets/issues/4728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4728/comments | https://api.github.com/repos/huggingface/datasets/issues/4728/events | https://github.com/huggingface/datasets/issues/4728 | 1,312,897,454 | I_kwDODunzps5OQTmu | 4,728 | load_dataset gives "403" error when using Financial Phrasebank | {'login': 'rohitvincent', 'id': 2209134, 'node_id': 'MDQ6VXNlcjIyMDkxMzQ=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2209134?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/rohitvincent', 'html_url': 'https://github.com/rohitvincent', 'followers_url': 'https://api.github.com/users/rohitvincent/followers', 'following_url': 'https://api.github.com/users/rohitvincent/following{/other_user}', 'gists_url': 'https://api.github.com/users/rohitvincent/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/rohitvincent/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/rohitvincent/subscriptions', 'organizations_url': 'https://api.github.com/users/rohitvincent/orgs', 'repos_url': 'https://api.github.com/users/rohitvincent/repos', 'events_url': 'https://api.github.com/users/rohitvincent/events{/privacy}', 'received_events_url': 'https://api.github.com/users/rohitvincent/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['Hi @rohitvincent, thanks for reporting.\r\n\r\nUnfortunately I\'m not able to reproduce your issue:\r\n```python\r\nIn [2]: from datasets import load_dataset, DownloadMode\r\n ...: load_dataset(path=\'financial_phrasebank\',name=\'sentences_allagree\', download_mode="force_redownload")\r\nDownloading builder script: 6.04kB [00:00, 2.87MB/s] \r\nDownloading metadata: 13.7kB [00:00, 7.24MB/s] \r\nDownloading and preparing dataset financial_phrasebank/sentences_allagree (download: 665.91 KiB, generated: 296.26 KiB, post-processed: Unknown size, total: 962.17 KiB) to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 682k/682k [00:00<00:00, 7.66MB/s]\r\nDataset financial_phrasebank downloaded and prepared to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 918.80it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: [\'sentence\', \'label\'],\r\n num_rows: 2264\r\n })\r\n})\r\n```\r\n\r\nAre you able to access the link? https://www.researchgate.net/profile/Pekka-Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip'
'Yes was able to download from the link manually. But still, get the same error when I use load_dataset.'
'Fixed once data files are hosted on the Hub:\r\n- #4598'] | 2022-07-21 08:43:32 | 2022-08-04 08:32:35 | 2022-08-04 08:32:35 | NONE | null | null | null | I tried both code snippets below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, both give a 403 error when executed from multiple machines, locally or on the cloud.
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree',download_mode=DownloadMode.FORCE_REDOWNLOAD)
```
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree')
```
**Error**
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4728/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4728/timeline | null | completed | false |
234 | https://api.github.com/repos/huggingface/datasets/issues/4727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4727/comments | https://api.github.com/repos/huggingface/datasets/issues/4727/events | https://github.com/huggingface/datasets/issues/4727 | 1,312,645,391 | I_kwDODunzps5OPWEP | 4,727 | Dataset Viewer issue for TheNoob3131/mosquito-data | {'login': 'thenerd31', 'id': 53668030, 'node_id': 'MDQ6VXNlcjUzNjY4MDMw', 'avatar_url': 'https://avatars.githubusercontent.com/u/53668030?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thenerd31', 'html_url': 'https://github.com/thenerd31', 'followers_url': 'https://api.github.com/users/thenerd31/followers', 'following_url': 'https://api.github.com/users/thenerd31/following{/other_user}', 'gists_url': 'https://api.github.com/users/thenerd31/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thenerd31/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thenerd31/subscriptions', 'organizations_url': 'https://api.github.com/users/thenerd31/orgs', 'repos_url': 'https://api.github.com/users/thenerd31/repos', 'events_url': 'https://api.github.com/users/thenerd31/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thenerd31/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | null | [] | null | ['The preview is working OK:\r\n\r\n![Screenshot from 2022-07-21 09-46-09](https://user-images.githubusercontent.com/8515462/180158929-bd8faad4-6392-4fc1-8d9c-df38aa9f8438.png)\r\n\r\n'] | 2022-07-21 05:24:48 | 2022-07-21 07:51:56 | 2022-07-21 07:45:01 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/TheNoob3131/mosquito-data/viewer/TheNoob3131--mosquito-data/test
### Description
The dataset preview is not showing for large files. It says 'split cache is empty' even though there are train and test splits.
### Owner
_No response_ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4727/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4727/timeline | null | completed | false |
235 | https://api.github.com/repos/huggingface/datasets/issues/4726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4726/comments | https://api.github.com/repos/huggingface/datasets/issues/4726/events | https://github.com/huggingface/datasets/pull/4726 | 1,312,082,175 | PR_kwDODunzps47ykPI | 4,726 | Fix broken link to the Hub | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-20 22:57:27 | 2022-07-21 14:33:18 | 2022-07-21 08:00:54 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4726', 'html_url': 'https://github.com/huggingface/datasets/pull/4726', 'diff_url': 'https://github.com/huggingface/datasets/pull/4726.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4726.patch', 'merged_at': datetime.datetime(2022, 7, 21, 8, 0, 54)} | The Markdown link fails to render if it is in the same line as the `<span>`. This PR implements @mishig25's fix by using `<a href=" ">` instead.
![Screen Shot 2022-07-20 at 3 53 05 PM](https://user-images.githubusercontent.com/59462357/180096412-7fbb33be-abb0-4e54-a52d-201b3b58e0f9.png) | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4726/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4726/timeline | null | null | true |
236 | https://api.github.com/repos/huggingface/datasets/issues/4725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4725/comments | https://api.github.com/repos/huggingface/datasets/issues/4725/events | https://github.com/huggingface/datasets/issues/4725 | 1,311,907,096 | I_kwDODunzps5OMh0Y | 4,725 | the_pile datasets URL broken. | {'login': 'TrentBrick', 'id': 12433427, 'node_id': 'MDQ6VXNlcjEyNDMzNDI3', 'avatar_url': 'https://avatars.githubusercontent.com/u/12433427?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/TrentBrick', 'html_url': 'https://github.com/TrentBrick', 'followers_url': 'https://api.github.com/users/TrentBrick/followers', 'following_url': 'https://api.github.com/users/TrentBrick/following{/other_user}', 'gists_url': 'https://api.github.com/users/TrentBrick/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/TrentBrick/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TrentBrick/subscriptions', 'organizations_url': 'https://api.github.com/users/TrentBrick/orgs', 'repos_url': 'https://api.github.com/users/TrentBrick/repos', 'events_url': 'https://api.github.com/users/TrentBrick/events{/privacy}', 'received_events_url': 'https://api.github.com/users/TrentBrick/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 
'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['Thanks for reporting, @TrentBrick. We are addressing the change with their data host server.\r\n\r\nOn the meantime, if you would like to work with your fixed local copy of the_pile script, you should use:\r\n```python\r\nload_dataset("path/to/your/local/the_pile/the_pile.py",...\r\n```\r\ninstead of just `load_dataset("the_pile",...`.\r\n\r\nThe latter downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~/.cache/huggingface/modules`) and uses that.'
'@TrentBrick, I have checked the URLs and both hosts work, the original (https://the-eye.eu/) and the mirror (https://mystic.the-eye.eu/). See e.g.:\r\n- https://mystic.the-eye.eu/public/AI/pile/\r\n- https://mystic.the-eye.eu/public/AI/pile_preliminary_components/\r\n\r\nPlease, let me know if you still find any issue loading this dataset by using current server URLs.'
"Great this is working now. Re the download from GitHub... I'm sure thought went into doing this but could it be made more clear maybe here? https://huggingface.co/docs/datasets/installation for example under installing from source? I spent over an hour questioning my sanity as I kept trying to edit this file, uninstall and reinstall the repo, git reset to previous versions of the file etc."
'Thanks for the quick reply and help too\r\n'
"Thanks @TrentBrick for the suggestion about improving our docs: we should definitely do this if you find they are not clear enough.\r\n\r\nCurrently, our docs explain how to load a dataset from a local loading script here: [Load > Local loading script](https://huggingface.co/docs/datasets/loading#local-loading-script)\r\n\r\nI've opened an issue here:\r\n- #4732\r\n\r\nFeel free to comment on it any additional explanation/suggestion/requirement related to this problem."] | 2022-07-20 20:57:30 | 2022-07-22 06:09:46 | 2022-07-21 07:38:19 | NONE | null | null | null | https://github.com/huggingface/datasets/pull/3627 changed the Eleuther AI Pile dataset URL from https://the-eye.eu/ to https://mystic.the-eye.eu/ but the latter is now broken and the former works again.
Note that when I `git clone` the repo, run `pip install -e .`, and then edit the URL back, the codebase doesn't seem to use this edit, so the mystic URL must also be cached somewhere else that I can't find. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4725/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4725/timeline | null | completed | false |
237 | https://api.github.com/repos/huggingface/datasets/issues/4724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4724/comments | https://api.github.com/repos/huggingface/datasets/issues/4724/events | https://github.com/huggingface/datasets/pull/4724 | 1,311,127,404 | PR_kwDODunzps47vLrP | 4,724 | Download and prepare as Parquet for cloud storage | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4724). All of your documentation changes will be reflected on that endpoint.'
'Added some docs for dask and took your comments into account\r\n\r\ncc @philschmid if you also want to take a look :)'
'Just noticed that it would be more convenient to pass the output dir to download_and_prepare directly, to bypass the caching logic which prepares the dataset at `<cache_dir>/<name>/<version>/<hash>/`. And this way the cache is only used for the downloaded files. What do you think ?\r\n\r\n```python \r\n\r\nbuilder = load_datadet_builder("squad")\r\n# or with a custom cache\r\nbuilder = load_datadet_builder("squad", cache_dir="path/to/local/cache/for/downloaded/files")\r\n\r\n# download and prepare to s3\r\nbuilder.download_and_prepare("s3://my_bucket/squad")\r\n```'
'Might be of interest: \r\nPyTorch and AWS introduced better support for S3 streaming in `torchtext`. \r\n![image](https://user-images.githubusercontent.com/32632186/183354186-a7f005e3-4167-4d80-ad1a-c62dd51ad7b6.png)\r\n'
'Having thought about it a bit more, I also agree with @philschmid in that it\'s important to follow the existing APIs (pandas/dask), which means we should support the following at some point:\r\n\r\n* remote data files resolution for the packaged modules to support `load_dataset("<format>", data_files="<fs_url>")`\r\n* `to_<format>("<fs_url>")`\r\n* `load_from_disk` and `save_to_disk` already expose the `fs` param, but it would be cool to support specifying `fsspec` URLs directly as the source/destination path (perhaps we can then deprecate `fs` to be fully aligned with pandas/dask)\r\n\r\nIMO these are the two main issues with the current approach:\r\n* relying on the builder API to generate the formatted files results in a non-friendly format due to how our caching works (a lot of nested subdirectories)\r\n* this approach still downloads the files needed to generate a dataset locally. Considering one of our goals is to align the streaming API with the non-streaming one, this could be avoided by running `to_<format>` on streamed/iterable datasets'] | 2022-07-20 13:39:02 | 2022-08-10 12:58:02 | null | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4724', 'html_url': 'https://github.com/huggingface/datasets/pull/4724', 'diff_url': 'https://github.com/huggingface/datasets/pull/4724.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4724.patch', 'merged_at': None} | Download a dataset as Parquet in a cloud storage can be useful for streaming mode and to use with spark/dask/ray.
This PR adds support for `fsspec` URIs like `s3://...`, `gcs://...`, etc., and adds a `file_format` argument to save as Parquet instead of Arrow:
```python
from datasets import *
cache_dir = "s3://..."
builder = load_dataset_builder("crime_and_punish", cache_dir=cache_dir)
builder.download_and_prepare(file_format="parquet")
```
Credentials for cloud storage can be passed using the `storage_options` argument of `load_dataset_builder`.
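For example, a hedged sketch of passing S3 credentials (the `key`/`secret` option names follow `s3fs` conventions and may differ for other filesystems):

```python
from datasets import load_dataset_builder

storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}
builder = load_dataset_builder(
    "crime_and_punish",
    cache_dir="s3://my_bucket/cache",
    storage_options=storage_options,
)
builder.download_and_prepare(file_format="parquet")
```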
For consistency with the BeamBasedBuilder, I name the parquet files `{builder.name}-{split}-xxxxx-of-xxxxx.parquet`. I think this is fine since we'll need to implement parquet sharding after this PR, so that a dataset can be used efficiently with dask for example.
Note that image/audio files are not yet embedded in the Parquet files; this will be added in a subsequent PR.
TODO:
- [x] docs
- [x] tests | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4724/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4724/timeline | null | null | true |
238 | https://api.github.com/repos/huggingface/datasets/issues/4723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4723/comments | https://api.github.com/repos/huggingface/datasets/issues/4723/events | https://github.com/huggingface/datasets/pull/4723 | 1,310,970,604 | PR_kwDODunzps47uoSj | 4,723 | Refactor conftest fixtures | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-20 12:15:22 | 2022-07-21 14:37:11 | 2022-07-21 14:24:18 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4723', 'html_url': 'https://github.com/huggingface/datasets/pull/4723', 'diff_url': 'https://github.com/huggingface/datasets/pull/4723.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4723.patch', 'merged_at': datetime.datetime(2022, 7, 21, 14, 24, 18)} | Previously, fixture modules `hub_fixtures` and `s3_fixtures`:
- were both at the root test directory
- were imported using `import *`
- as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`
This PR:
- puts both fixture modules in a dedicated directory `fixtures`
- renames both to: `fixtures.hub` and `fixtures.s3`
- imports them into `conftest` as plugins, using the `pytest_plugins`: this avoids the `import *`
- additionally creates a new fixture module `fixtures.files` with all file-related fixtures | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4723/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4723/timeline | null | null | true |
239 | https://api.github.com/repos/huggingface/datasets/issues/4722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4722/comments | https://api.github.com/repos/huggingface/datasets/issues/4722/events | https://github.com/huggingface/datasets/pull/4722 | 1,310,785,916 | PR_kwDODunzps47t_HJ | 4,722 | Docs: Fix same-page haslinks | {'login': 'mishig25', 'id': 11827707, 'node_id': 'MDQ6VXNlcjExODI3NzA3', 'avatar_url': 'https://avatars.githubusercontent.com/u/11827707?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mishig25', 'html_url': 'https://github.com/mishig25', 'followers_url': 'https://api.github.com/users/mishig25/followers', 'following_url': 'https://api.github.com/users/mishig25/following{/other_user}', 'gists_url': 'https://api.github.com/users/mishig25/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mishig25/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mishig25/subscriptions', 'organizations_url': 'https://api.github.com/users/mishig25/orgs', 'repos_url': 'https://api.github.com/users/mishig25/repos', 'events_url': 'https://api.github.com/users/mishig25/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mishig25/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-20 10:04:37 | 2022-07-20 17:02:33 | 2022-07-20 16:49:36 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4722', 'html_url': 'https://github.com/huggingface/datasets/pull/4722', 'diff_url': 'https://github.com/huggingface/datasets/pull/4722.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4722.patch', 'merged_at': datetime.datetime(2022, 7, 20, 16, 49, 36)} | `href="/docs/datasets/quickstart#audio"` implicitly goes to `href="/docs/datasets/{$LATEST_STABLE_VERSION}/quickstart#audio"`. Therefore, https://huggingface.co/docs/datasets/quickstart#audio #audio hashlink does not work since the new docs were not added to v2.3.2 (LATEST_STABLE_VERSION)
To preserve the version, it should be just `href="#audio"`, which will implicitly go to current_page + the #audio element. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4722/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4722/timeline | null | null | true |
240 | https://api.github.com/repos/huggingface/datasets/issues/4721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4721/comments | https://api.github.com/repos/huggingface/datasets/issues/4721/events | https://github.com/huggingface/datasets/issues/4721 | 1,310,253,552 | I_kwDODunzps5OGOHw | 4,721 | PyArrow Dataset error when calling `load_dataset` | {'login': 'piraka9011', 'id': 16828657, 'node_id': 'MDQ6VXNlcjE2ODI4NjU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/16828657?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/piraka9011', 'html_url': 'https://github.com/piraka9011', 'followers_url': 'https://api.github.com/users/piraka9011/followers', 'following_url': 'https://api.github.com/users/piraka9011/following{/other_user}', 'gists_url': 'https://api.github.com/users/piraka9011/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/piraka9011/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/piraka9011/subscriptions', 'organizations_url': 'https://api.github.com/users/piraka9011/orgs', 'repos_url': 'https://api.github.com/users/piraka9011/repos', 'events_url': 'https://api.github.com/users/piraka9011/events{/privacy}', 'received_events_url': 'https://api.github.com/users/piraka9011/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ["Hi ! It looks like a bug in `pyarrow`. If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nTo achieve that you can try to lower the value of `max_shard_size` and also don't use `map` before `push_to_hub`.\r\n\r\nDo you have a minimum reproducible example that we can share with the Arrow team for further debugging ?"
"> If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nYup, I did not encounter this bug when I was testing my script with a slice of <1000 samples for my dataset.\r\n\r\n> Do you have a minimum reproducible example...\r\n\r\nNot sure if I can get more minimal than the script I shared above. Are you asking for a sample json file?\r\nJust generate a random manifest list, I can add that to the above script if that's what you mean?\r\n"
'Actually this is probably linked to this open issue: https://issues.apache.org/jira/browse/ARROW-5030.\r\n\r\nsetting `max_shard_size="2GB"` should do the job (or `max_shard_size="1GB"` if you want to be on the safe side, especially given that there can be some variance in the shard sizes if the dataset is not evenly distributed)'] | 2022-07-20 01:16:03 | 2022-07-22 14:11:47 | null | NONE | null | null | null | ## Describe the bug
I am fine-tuning a wav2vec2 model on my own dataset, following this script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
Loading my audio dataset from the Hub (which was originally generated from disk) results in the following PyArrow error:
```sh
File "/home/ubuntu/w2v2/run_speech_recognition_ctc.py", line 227, in main
raw_datasets = load_dataset(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/load.py", line 1679, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 1268, in _prepare_split
for key, table in logging.tqdm(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1309, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
## Steps to reproduce the bug
I created a dataset from a JSON lines manifest of `audio_filepath`, `text`, and `duration`.
When creating the dataset, I do something like this:
```python
import json
from datasets import Dataset, Audio
# manifest_lines is a list of JSON lines, each with "audio_filepath", "duration", and "text"
manifest_dict = {"audio": [], "duration": [], "transcription": []}
for line in manifest_lines:
line = line.strip()
if line:
line_dict = json.loads(line)
manifest_dict["audio"].append(f"{root_path}/{line_dict['audio_filepath']}")
manifest_dict["duration"].append(line_dict["duration"])
manifest_dict["transcription"].append(line_dict["text"])
# Create a HF dataset
dataset = Dataset.from_dict(manifest_dict).cast_column(
"audio", Audio(sampling_rate=16_000),
)
# From the docs for saving to disk
# https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.save_to_disk
def read_audio_file(example):
with open(example["audio"]["path"], "rb") as f:
return {"audio": {"bytes": f.read()}}
dataset = dataset.map(read_audio_file, num_proc=70)
dataset.save_to_disk(f"/audio-data/hf/{artifact_name}")
dataset.push_to_hub(f"{org-name}/{artifact_name}", max_shard_size="5GB", private=True)
```
Then, when I call `load_dataset()` in my training script with the same dataset I generated above (downloading it from the Hugging Face Hub), I get the above stack trace.
I am able to load the dataset fine if I use `load_from_disk()`.
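Following the suggestion in the comments above (an editor's sketch, reusing the placeholders from the snippet earlier in this report), smaller shards keep each Parquet file within a single row group and avoid the chunked-array conversion:

```python
# Workaround sketch from the discussion: smaller shards so each Parquet file
# ends up with a single (non-chunked) row group. `org_name` and `artifact_name`
# are placeholders, as in the original snippet.
dataset.push_to_hub(f"{org_name}/{artifact_name}", max_shard_size="1GB", private=True)
```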
## Expected results
`load_dataset()` should behave just like `load_from_disk()` and not cause any errors.
## Actual results
See above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
I am using the `huggingface/transformers-pytorch-gpu:latest` image
- `datasets` version: 2.3.0
- Platform: Docker/Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 8.0.0
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4721/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4721/timeline | null | null | false |
241 | https://api.github.com/repos/huggingface/datasets/issues/4720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4720/comments | https://api.github.com/repos/huggingface/datasets/issues/4720/events | https://github.com/huggingface/datasets/issues/4720 | 1,309,980,195 | I_kwDODunzps5OFLYj | 4,720 | Dataset Viewer issue for shamikbose89/lancaster_newsbooks | {'login': 'shamikbose', 'id': 50837285, 'node_id': 'MDQ6VXNlcjUwODM3Mjg1', 'avatar_url': 'https://avatars.githubusercontent.com/u/50837285?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/shamikbose', 'html_url': 'https://github.com/shamikbose', 'followers_url': 'https://api.github.com/users/shamikbose/followers', 'following_url': 'https://api.github.com/users/shamikbose/following{/other_user}', 'gists_url': 'https://api.github.com/users/shamikbose/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/shamikbose/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/shamikbose/subscriptions', 'organizations_url': 'https://api.github.com/users/shamikbose/orgs', 'repos_url': 'https://api.github.com/users/shamikbose/repos', 'events_url': 'https://api.github.com/users/shamikbose/events{/privacy}', 'received_events_url': 'https://api.github.com/users/shamikbose/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names("shamikbose89/lancaster_newsbooks", "default")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/shamikbose89--lancaster_newsbooks/2d1c63d269bf7b9342accce0a95960b1710ab4bc774248878bd80eb96c1afaf7/lancaster_newsbooks.py", line 73, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URL)\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 916, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 879, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 348, in map_nested\r\n return function(data_struct)\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 884, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 388, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File 
"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 354, in _get_extraction_protocol_with_magic_number\r\n f.seek(0)\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 684, in seek\r\n raise ValueError("Cannot seek streaming HTTP file")\r\nValueError: Cannot seek streaming HTTP file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nping @huggingface/datasets '
"Oh, I removed the 'split' key from `kwargs`. I put it back in, but there's still the same error"
"It looks like the data host doesn't support http range requests, which is necessary to glob inside a ZIP archive in streaming mode. Can you try hosting the dataset elsewhere ? Or download each file separately from https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 ?"
'@lhoestq Thanks! That seems to have solved it. I can get the splits with the `get_dataset_split_names()` function. The dataset viewer is still not loading properly, though. The new error is\r\n```\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\r\n\r\nPS. The dataset loads properly and can be accessed'] | 2022-07-19 20:00:07 | 2022-07-20 17:06:02 | null | NONE | null | null | null | ### Link
https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks
### Description
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
I am able to use the dataset loading script locally, and it also runs when I'm using the one from the Hub, but the viewer still doesn't load
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4720/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4720/timeline | null | null | false |
242 | https://api.github.com/repos/huggingface/datasets/issues/4719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4719/comments | https://api.github.com/repos/huggingface/datasets/issues/4719/events | https://github.com/huggingface/datasets/issues/4719 | 1,309,854,492 | I_kwDODunzps5OEssc | 4,719 | Issue loading TheNoob3131/mosquito-data dataset | {'login': 'thenerd31', 'id': 53668030, 'node_id': 'MDQ6VXNlcjUzNjY4MDMw', 'avatar_url': 'https://avatars.githubusercontent.com/u/53668030?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thenerd31', 'html_url': 'https://github.com/thenerd31', 'followers_url': 'https://api.github.com/users/thenerd31/followers', 'following_url': 'https://api.github.com/users/thenerd31/following{/other_user}', 'gists_url': 'https://api.github.com/users/thenerd31/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thenerd31/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thenerd31/subscriptions', 'organizations_url': 'https://api.github.com/users/thenerd31/orgs', 'repos_url': 'https://api.github.com/users/thenerd31/repos', 'events_url': 'https://api.github.com/users/thenerd31/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thenerd31/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ["I am also getting a ValueError: 'Couldn't cast' at the bottom. Is this because of some delimiter issue? My dataset is on the Huggingface Hub. If you could look at it, that would be greatly appreciated."
"Hi @thenerd31, thanks for reporting.\r\n\r\nPlease note that your issue is not caused by the Hugging Face Datasets library, but it has to do with the specific implementation of your dataset on the Hub.\r\n\r\nTherefore, I'm transferring this discussion to your own dataset Community tab: https://huggingface.co/datasets/TheNoob3131/mosquito-data/discussions/1"] | 2022-07-19 17:47:37 | 2022-07-20 06:46:57 | 2022-07-20 06:46:02 | NONE | null | null | null | ![image](https://user-images.githubusercontent.com/53668030/179815591-d75fa7d3-3122-485f-a852-b06a68909066.png)
So my dataset is public on the Hugging Face Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files but then throws a ValueError. When I went to my directory to see if the files were downloaded, the folder was blank.
Here is the error below:
ValueError Traceback (most recent call last)
Input In [8], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("TheNoob3131/mosquito-data", split="train")
File ~\Anaconda3\lib\site-packages\datasets\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
1680 download_config=download_config,
1681 download_mode=download_mode,
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
1684 use_auth_token=use_auth_token,
1685 )
1687 # Build dataset for splits
1688 keep_in_memory = (
1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1690 )
Is the dataset in the wrong format or is there some security permission that I should enable? | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4719/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4719/timeline | null | completed | false |
243 | https://api.github.com/repos/huggingface/datasets/issues/4718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4718/comments | https://api.github.com/repos/huggingface/datasets/issues/4718/events | https://github.com/huggingface/datasets/pull/4718 | 1,309,520,453 | PR_kwDODunzps47prWR | 4,718 | Make Extractor accept Path as input | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-19 13:25:06 | 2022-07-22 13:42:27 | 2022-07-22 13:29:43 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4718', 'html_url': 'https://github.com/huggingface/datasets/pull/4718', 'diff_url': 'https://github.com/huggingface/datasets/pull/4718.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4718.patch', 'merged_at': datetime.datetime(2022, 7, 22, 13, 29, 43)} | This PR:
- Makes `Extractor` accept instances of `Path` as input
- Removes unnecessary castings of `Path` to `str` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4718/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4718/timeline | null | null | true |
244 | https://api.github.com/repos/huggingface/datasets/issues/4717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4717/comments | https://api.github.com/repos/huggingface/datasets/issues/4717/events | https://github.com/huggingface/datasets/issues/4717 | 1,309,512,483 | I_kwDODunzps5ODZMj | 4,717 | Dataset Viewer issue for LawalAfeez/englishreview-ds-mini | {'login': 'lawalAfeez820', 'id': 69974956, 'node_id': 'MDQ6VXNlcjY5OTc0OTU2', 'avatar_url': 'https://avatars.githubusercontent.com/u/69974956?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lawalAfeez820', 'html_url': 'https://github.com/lawalAfeez820', 'followers_url': 'https://api.github.com/users/lawalAfeez820/followers', 'following_url': 'https://api.github.com/users/lawalAfeez820/following{/other_user}', 'gists_url': 'https://api.github.com/users/lawalAfeez820/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lawalAfeez820/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lawalAfeez820/subscriptions', 'organizations_url': 'https://api.github.com/users/lawalAfeez820/orgs', 'repos_url': 'https://api.github.com/users/lawalAfeez820/repos', 'events_url': 'https://api.github.com/users/lawalAfeez820/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lawalAfeez820/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 
'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False}] | null | ['It\'s currently working, as far as I understand\r\n\r\nhttps://huggingface.co/datasets/LawalAfeez/englishreview-ds-mini/viewer/LawalAfeez--englishreview-ds-mini/train\r\n\r\n<img width="1556" alt="Capture d’écran 2022-07-19 à 09 24 01" src="https://user-images.githubusercontent.com/1676121/179761130-2d7980b9-c0f6-4093-8b1d-f0a3872fef3f.png">\r\n\r\n---\r\n\r\nWhat was your issue?'] | 2022-07-19 13:19:39 | 2022-07-20 08:32:57 | 2022-07-20 08:32:57 | NONE | null | null | null | ### Link
_No response_
### Description
Unable to view the split data
### Owner
_No response_ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4717/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4717/timeline | null | completed | false |
245 | https://api.github.com/repos/huggingface/datasets/issues/4716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4716/comments | https://api.github.com/repos/huggingface/datasets/issues/4716/events | https://github.com/huggingface/datasets/pull/4716 | 1,309,455,838 | PR_kwDODunzps47pdbh | 4,716 | Support "tags" yaml tag | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
"IMO `DatasetMetadata` shouldn't crash with attributes that it doesn't know, btw"
'Yea this PR is mostly to have a validation that this field contains a list of strings.\r\n\r\nRegarding unknown fields, the tagging app currently returns an error if a field is unknown using the `DatasetMetadata`. We can change that though'] | 2022-07-19 12:34:31 | 2022-07-20 13:44:50 | 2022-07-20 13:31:56 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4716', 'html_url': 'https://github.com/huggingface/datasets/pull/4716', 'diff_url': 'https://github.com/huggingface/datasets/pull/4716.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4716.patch', 'merged_at': datetime.datetime(2022, 7, 20, 13, 31, 56)} | Added the "tags" YAML tag, so that users can specify data domain/topics keywords for dataset search | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4716/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4716/timeline | null | null | true |
246 | https://api.github.com/repos/huggingface/datasets/issues/4715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4715/comments | https://api.github.com/repos/huggingface/datasets/issues/4715/events | https://github.com/huggingface/datasets/pull/4715 | 1,309,405,980 | PR_kwDODunzps47pSui | 4,715 | Fix POS tags | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
'CI failures are about missing content in the dataset cards or bad tags, and this is unrelated to this PR. Merging :)'] | 2022-07-19 11:52:54 | 2022-07-19 12:54:34 | 2022-07-19 12:41:16 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4715', 'html_url': 'https://github.com/huggingface/datasets/pull/4715', 'diff_url': 'https://github.com/huggingface/datasets/pull/4715.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4715.patch', 'merged_at': datetime.datetime(2022, 7, 19, 12, 41, 15)} | We're now using `part-of-speech` and not `part-of-speech-tagging`, see discussion here: https://github.com/huggingface/datasets/commit/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4715/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4715/timeline | null | null | true |
247 | https://api.github.com/repos/huggingface/datasets/issues/4714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4714/comments | https://api.github.com/repos/huggingface/datasets/issues/4714/events | https://github.com/huggingface/datasets/pull/4714 | 1,309,265,682 | PR_kwDODunzps47o0YG | 4,714 | Fix named split sorting and remove unnecessary casting | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
"hahaha what a timing, I added my comment right after you merged x)\r\n\r\nyou can ignore my (nit), it's fine"
'Sorry, just too sync... :sweat_smile: '] | 2022-07-19 09:48:28 | 2022-07-22 09:39:45 | 2022-07-22 09:10:57 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4714', 'html_url': 'https://github.com/huggingface/datasets/pull/4714', 'diff_url': 'https://github.com/huggingface/datasets/pull/4714.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4714.patch', 'merged_at': datetime.datetime(2022, 7, 22, 9, 10, 57)} | This PR:
- makes `NamedSplit` sortable: so that `sorted()` can be called on them
- removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set`
- removes unnecessary casting of `NamedSplit` to `str` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4714/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4714/timeline | null | null | true |
248 | https://api.github.com/repos/huggingface/datasets/issues/4713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4713/comments | https://api.github.com/repos/huggingface/datasets/issues/4713/events | https://github.com/huggingface/datasets/pull/4713 | 1,309,184,756 | PR_kwDODunzps47ojC1 | 4,713 | Document installation of sox OS dependency for audio | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-19 08:42:35 | 2022-07-21 08:16:59 | 2022-07-21 08:04:15 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4713', 'html_url': 'https://github.com/huggingface/datasets/pull/4713', 'diff_url': 'https://github.com/huggingface/datasets/pull/4713.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4713.patch', 'merged_at': datetime.datetime(2022, 7, 21, 8, 4, 15)} | The `sox` OS package needs being installed manually using the distribution package manager.
This PR adds this explanation to the docs. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4713/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4713/timeline | null | null | true |
249 | https://api.github.com/repos/huggingface/datasets/issues/4712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4712/comments | https://api.github.com/repos/huggingface/datasets/issues/4712/events | https://github.com/huggingface/datasets/pull/4712 | 1,309,177,302 | PR_kwDODunzps47ohdr | 4,712 | Highlight non-commercial license in amazon_reviews_multi dataset card | {'login': 'sbroadhurst-hf', 'id': 108879611, 'node_id': 'U_kgDOBn1e-w', 'avatar_url': 'https://avatars.githubusercontent.com/u/108879611?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sbroadhurst-hf', 'html_url': 'https://github.com/sbroadhurst-hf', 'followers_url': 'https://api.github.com/users/sbroadhurst-hf/followers', 'following_url': 'https://api.github.com/users/sbroadhurst-hf/following{/other_user}', 'gists_url': 'https://api.github.com/users/sbroadhurst-hf/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sbroadhurst-hf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sbroadhurst-hf/subscriptions', 'organizations_url': 'https://api.github.com/users/sbroadhurst-hf/orgs', 'repos_url': 'https://api.github.com/users/sbroadhurst-hf/repos', 'events_url': 'https://api.github.com/users/sbroadhurst-hf/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sbroadhurst-hf/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-19 08:36:20 | 2022-07-27 16:09:40 | 2022-07-27 15:57:41 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4712', 'html_url': 'https://github.com/huggingface/datasets/pull/4712', 'diff_url': 'https://github.com/huggingface/datasets/pull/4712.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4712.patch', 'merged_at': datetime.datetime(2022, 7, 27, 15, 57, 41)} | Highlight that the licence granted by Amazon only covers non-commercial research use. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4712/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4712/timeline | null | null | true |
250 | https://api.github.com/repos/huggingface/datasets/issues/4711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4711/comments | https://api.github.com/repos/huggingface/datasets/issues/4711/events | https://github.com/huggingface/datasets/issues/4711 | 1,309,138,570 | I_kwDODunzps5OB96K | 4,711 | Document how to create a dataset loading script for audio/vision | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | open | false | null | [] | null | [] | 2022-07-19 08:03:40 | 2022-08-01 15:08:11 | null | MEMBER | null | null | null | Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However, we only explain how to *Create a dataset loading script* for text data.
I think it would be useful to add the same for Audio/Vision, as these have some specificities that differ from text.
See, for example:
- #4697
- and comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492
CC: @stevhliu
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4711/reactions', 'total_count': 4, '+1': 4, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4711/timeline | null | null | false |
251 | https://api.github.com/repos/huggingface/datasets/issues/4710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4710/comments | https://api.github.com/repos/huggingface/datasets/issues/4710/events | https://github.com/huggingface/datasets/pull/4710 | 1,308,958,525 | PR_kwDODunzps47ny0L | 4,710 | Add object detection processing tutorial | {'login': 'nateraw', 'id': 32437151, 'node_id': 'MDQ6VXNlcjMyNDM3MTUx', 'avatar_url': 'https://avatars.githubusercontent.com/u/32437151?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/nateraw', 'html_url': 'https://github.com/nateraw', 'followers_url': 'https://api.github.com/users/nateraw/followers', 'following_url': 'https://api.github.com/users/nateraw/following{/other_user}', 'gists_url': 'https://api.github.com/users/nateraw/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/nateraw/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/nateraw/subscriptions', 'organizations_url': 'https://api.github.com/users/nateraw/orgs', 'repos_url': 'https://api.github.com/users/nateraw/repos', 'events_url': 'https://api.github.com/users/nateraw/events{/privacy}', 'received_events_url': 'https://api.github.com/users/nateraw/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
"Great idea! Now that we have more than one task, it makes sense to separate image classification and object detection so it'll be easier for users to follow."
'@lhoestq do we want to do that in this PR, or should we merge it and let @stevhliu reorganize separately? '] | 2022-07-19 04:23:46 | 2022-07-21 20:10:35 | 2022-07-21 19:56:42 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4710', 'html_url': 'https://github.com/huggingface/datasets/pull/4710', 'diff_url': 'https://github.com/huggingface/datasets/pull/4710.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4710.patch', 'merged_at': datetime.datetime(2022, 7, 21, 19, 56, 42)} | The following adds a quick guide on how to process object detection datasets with `albumentations`. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4710/reactions', 'total_count': 2, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 2, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4710/timeline | null | null | true |
252 | https://api.github.com/repos/huggingface/datasets/issues/4709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4709/comments | https://api.github.com/repos/huggingface/datasets/issues/4709/events | https://github.com/huggingface/datasets/issues/4709 | 1,308,633,093 | I_kwDODunzps5OACgF | 4,709 | WMT21 & WMT22 | {'login': 'Muennighoff', 'id': 62820084, 'node_id': 'MDQ6VXNlcjYyODIwMDg0', 'avatar_url': 'https://avatars.githubusercontent.com/u/62820084?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Muennighoff', 'html_url': 'https://github.com/Muennighoff', 'followers_url': 'https://api.github.com/users/Muennighoff/followers', 'following_url': 'https://api.github.com/users/Muennighoff/following{/other_user}', 'gists_url': 'https://api.github.com/users/Muennighoff/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Muennighoff/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Muennighoff/subscriptions', 'organizations_url': 'https://api.github.com/users/Muennighoff/orgs', 'repos_url': 'https://api.github.com/users/Muennighoff/repos', 'events_url': 'https://api.github.com/users/Muennighoff/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Muennighoff/received_events', 'type': 'User', 'site_admin': False} | [{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}] | open | false | null | [] | null | ["Hi ! That would be awesome to have them indeed, thanks for opening this issue\r\n\r\nI just added you to the WMT org on the HF Hub if you're interested in adding those datasets.\r\n\r\nFeel free to create a dataset repository for each dataset and upload the data files there :) preferably in ZIP archives instead of TAR archives (the current WMT scripts don't support streaming TAR archives, so it would break the dataset preview). We've also had issues with the `statmt.org` host (data unavailable, slow download speed), that's why I think it's better if we re-host the files on the Hub.\r\n\r\n`wmt21` (and wmt22) can be added in this GitHub repository I think, for consistency with the previous ones.\r\nTo add it, you can copy paste the code of the previous one (e.g. wmt19), and add the new data:\r\n- in wmt_utils.py, add the new data subsets. You need to provide the download URLs, as well as the target and source languages\r\n- in wmt21.py (renamed from wmt19.py), you can specify the subsets that WMT21 uses (i.e. the one you just added)\r\n- in wmt_utils.py, define the python function that must be used to parse the subsets you added. To do so, you must go in `_generate_examples` and chose the proper `sub_generator` based on the subset name. For example, the `paracrawl_v3` subset uses the `_parse_tmx` function:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/datasets/wmt19/wmt_utils.py#L834-L835\r\n\r\nHopefully the data is in a format that is already supported and there's no need to write a new `_parse_*` function for the new subsets. Let me know if you have questions or if I can help :)"] | 2022-07-18 21:05:33 | 2022-07-19 10:07:44 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** WMT21 & WMT22
- **Description:** We are going to have three tracks: two small tasks and a large task.
The small tracks evaluate translation between fairly related languages and English (all pairs). The large track uses 101 languages.
- **Paper:** /
- **Data:** https://statmt.org/wmt21/large-scale-multilingual-translation-task.html https://statmt.org/wmt22/large-scale-multilingual-translation-task.html
- **Motivation:** Many more languages than previous WMT versions - Could be very high impact
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
I could also tackle this. I saw the existing logic for WMT models is a bit complex (datasets are stored on the wmt account & retrieved in separate wmt datasets afaict). How long do you think it would take me? @lhoestq
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4709/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4709/timeline | null | null | false |
253 | https://api.github.com/repos/huggingface/datasets/issues/4708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4708/comments | https://api.github.com/repos/huggingface/datasets/issues/4708/events | https://github.com/huggingface/datasets/pull/4708 | 1,308,279,700 | PR_kwDODunzps47lewm | 4,708 | Fix require torchaudio and refactor test requirements | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-18 17:24:28 | 2022-07-22 06:30:56 | 2022-07-22 06:18:11 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4708', 'html_url': 'https://github.com/huggingface/datasets/pull/4708', 'diff_url': 'https://github.com/huggingface/datasets/pull/4708.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4708.patch', 'merged_at': datetime.datetime(2022, 7, 22, 6, 18, 11)} | Currently there is a bug in `require_torchaudio` (indeed it is requiring `sox` instead):
```python
def require_torchaudio(test_case):
if find_spec("sox") is None:
...
```
The bug was introduced by:
- #3685
- Commit: https://github.com/huggingface/datasets/pull/3685/commits/b5a3e7122d49c4dcc9333b1d8d18a833fc04b940
which moved
```python
require_sndfile = pytest.mark.skipif(
# In Windows and OS X, soundfile installs sndfile
(sys.platform != "linux" and find_spec("soundfile") is None)
# In Linux, soundfile throws RuntimeError if sndfile not installed with distribution package manager
or (sys.platform == "linux" and find_library("sndfile") is None),
reason="Test requires 'sndfile': `pip install soundfile`; "
"Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`",
)
require_sox = pytest.mark.skipif(
find_library("sox") is None,
reason="Test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install sox`",
)
require_torchaudio = pytest.mark.skipif(find_spec("torchaudio") is None, reason="Test requires 'torchaudio'")
```
to
```python
def require_sndfile(test_case):
"""
Decorator marking a test that requires soundfile.
These tests are skipped when soundfile isn't installed.
"""
if (sys.platform != "linux" and find_spec("soundfile") is None) or (
sys.platform == "linux" and find_library("sndfile") is None
):
test_case = unittest.skip(
"test requires 'sndfile': `pip install soundfile`; "
"Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`",
)(test_case)
return test_case
def require_sox(test_case):
"""
Decorator marking a test that requires sox.
These tests are skipped when sox isn't installed.
"""
if find_library("sox") is None:
return unittest.skip("test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install sox`")(
test_case
)
return test_case
def require_torchaudio(test_case):
"""
Decorator marking a test that requires torchaudio.
These tests are skipped when torchaudio isn't installed.
"""
if find_spec("sox") is None:
return unittest.skip("test requires 'torchaudio'")(test_case)
return test_case
```
This PR:
- fixes the bug in `require_torchaudio`
- refactors the test requirements back to using `pytest` instead of `unittest`
- the text in `pytest.skipif` `reason` can be used if needed in a test body: `require_torchaudio.kwargs["reason"]` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4708/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4708/timeline | null | null | true |
254 | https://api.github.com/repos/huggingface/datasets/issues/4707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4707/comments | https://api.github.com/repos/huggingface/datasets/issues/4707/events | https://github.com/huggingface/datasets/issues/4707 | 1,308,251,405 | I_kwDODunzps5N-lUN | 4,707 | Dataset Viewer issue for TheNoob3131/mosquito-data | {'login': 'thenerd31', 'id': 53668030, 'node_id': 'MDQ6VXNlcjUzNjY4MDMw', 'avatar_url': 'https://avatars.githubusercontent.com/u/53668030?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thenerd31', 'html_url': 'https://github.com/thenerd31', 'followers_url': 'https://api.github.com/users/thenerd31/followers', 'following_url': 'https://api.github.com/users/thenerd31/following{/other_user}', 'gists_url': 'https://api.github.com/users/thenerd31/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/thenerd31/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thenerd31/subscriptions', 'organizations_url': 'https://api.github.com/users/thenerd31/orgs', 'repos_url': 'https://api.github.com/users/thenerd31/repos', 'events_url': 'https://api.github.com/users/thenerd31/events{/privacy}', 'received_events_url': 'https://api.github.com/users/thenerd31/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 
'site_admin': False}] | null | ['Thanks for reporting. I refreshed the dataset viewer and it now works as expected.\r\n\r\nhttps://huggingface.co/datasets/TheNoob3131/mosquito-data\r\n\r\n<img width="1135" alt="Capture d’écran 2022-07-18 à 13 15 22" src="https://user-images.githubusercontent.com/1676121/179566497-e47f1a27-fd84-4a8d-9d7f-2e0f2da803df.png">\r\n\r\nWe will investigate why it occurred in the first place\r\n'
'By chance, could you provide some details about the operations done on the dataset: was it private? gated?'
'Yes, it was a private dataset, and when I made it public, the Dataset Preview did not work. \r\n\r\nHowever, now when I make the dataset private, it says that the Dataset Preview has been disabled. Why is this?'
'Thanks for the details. For now, the dataset viewer is always disabled on private datasets (see https://huggingface.co/docs/hub/datasets-viewer for more details)'
"Hi, it was working fine for a few hours, but then I can't see the dataset viewer again (public dataset). Why is this still happening?\r\nIt's the same error too:\r\n![image](https://user-images.githubusercontent.com/53668030/179602465-f220f971-d3aa-49ba-a31b-60510f4c2a89.png)\r\n"
"OK? This is a bug, thanks for help spotting and reproducing it (it occurs when a dataset is switched to private, then to public). We will be working on it, meanwhile, I've restored the dataset viewer manually again."] | 2022-07-18 17:07:19 | 2022-07-18 19:44:46 | 2022-07-18 17:15:50 | NONE | null | null | null | ### Link
_No response_
### Description
Getting this error when trying to view dataset preview:
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/TheNoob3131/mosquito-data/resolve/8aceebd6c4a359d216d10ef020868bd9e8c986dd/0_Africa_train.csv')
### Owner
_No response_ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4707/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4707/timeline | null | completed | false |
255 | https://api.github.com/repos/huggingface/datasets/issues/4706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4706/comments | https://api.github.com/repos/huggingface/datasets/issues/4706/events | https://github.com/huggingface/datasets/pull/4706 | 1,308,198,454 | PR_kwDODunzps47lNBg | 4,706 | Fix empty examples in xtreme dataset for bucc18 config | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
'I guess the report link is this instead: https://huggingface.co/datasets/xtreme/discussions/1'] | 2022-07-18 16:22:46 | 2022-07-19 06:41:14 | 2022-07-19 06:29:17 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4706', 'html_url': 'https://github.com/huggingface/datasets/pull/4706', 'diff_url': 'https://github.com/huggingface/datasets/pull/4706.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4706.patch', 'merged_at': datetime.datetime(2022, 7, 19, 6, 29, 17)} | As reported in https://huggingface.co/muibk, there are empty examples in xtreme/bucc18.de
I applied your fix @mustaszewski
I also used a dict to make the dataset generation much faster | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4706/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4706/timeline | null | null | true |
256 | https://api.github.com/repos/huggingface/datasets/issues/4705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4705/comments | https://api.github.com/repos/huggingface/datasets/issues/4705/events | https://github.com/huggingface/datasets/pull/4705 | 1,308,161,794 | PR_kwDODunzps47lFDo | 4,705 | Fix crd3 | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-18 15:53:44 | 2022-07-21 17:18:44 | 2022-07-21 17:06:30 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4705', 'html_url': 'https://github.com/huggingface/datasets/pull/4705', 'diff_url': 'https://github.com/huggingface/datasets/pull/4705.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4705.patch', 'merged_at': datetime.datetime(2022, 7, 21, 17, 6, 30)} | As reported in https://huggingface.co/datasets/crd3/discussions/1#62cc377073b2512b81662794, each split of the dataset was containing the same data. This issues comes from a bug in the dataset script
I fixed it and also uploaded the data to hf.co to make the dataset work in streaming mode | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4705/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4705/timeline | null | null | true |
257 | https://api.github.com/repos/huggingface/datasets/issues/4704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4704/comments | https://api.github.com/repos/huggingface/datasets/issues/4704/events | https://github.com/huggingface/datasets/pull/4704 | 1,308,147,876 | PR_kwDODunzps47lCFt | 4,704 | Skip tests only for lz4/zstd params if not installed | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-18 15:41:40 | 2022-07-19 13:02:31 | 2022-07-19 12:49:18 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4704', 'html_url': 'https://github.com/huggingface/datasets/pull/4704', 'diff_url': 'https://github.com/huggingface/datasets/pull/4704.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4704.patch', 'merged_at': datetime.datetime(2022, 7, 19, 12, 49, 18)} | Currently, if `zstandard` or `lz4` are not installed, `test_compression_filesystems` and `test_streaming_dl_manager_extract_all_supported_single_file_compression_types` are skipped for all compression format parameters.
This PR fixes these tests, so that if `zstandard` or `lz4` are not installed, the tests are skipped only for the corresponding compression parameters (`zstd` or `lz4`), whereas the tests are not skipped for the other compression parameters (`gzip`, `xz` and `bz2`).
Related to:
- #4688 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4704/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4704/timeline | null | null | true |
258 | https://api.github.com/repos/huggingface/datasets/issues/4703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4703/comments | https://api.github.com/repos/huggingface/datasets/issues/4703/events | https://github.com/huggingface/datasets/pull/4703 | 1,307,844,097 | PR_kwDODunzps47kABf | 4,703 | Make cast in `from_pandas` more robust | {'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-18 11:55:49 | 2022-07-22 11:17:42 | 2022-07-22 11:05:24 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4703', 'html_url': 'https://github.com/huggingface/datasets/pull/4703', 'diff_url': 'https://github.com/huggingface/datasets/pull/4703.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4703.patch', 'merged_at': datetime.datetime(2022, 7, 22, 11, 5, 24)} | Make the cast in `from_pandas` more robust (as it was done for the packaged modules in https://github.com/huggingface/datasets/pull/4364)
This should be useful in situations like [this one](https://discuss.huggingface.co/t/loading-custom-audio-dataset-and-fine-tuning-model/8836/4). | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4703/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4703/timeline | null | null | true |
259 | https://api.github.com/repos/huggingface/datasets/issues/4702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4702/comments | https://api.github.com/repos/huggingface/datasets/issues/4702/events | https://github.com/huggingface/datasets/issues/4702 | 1,307,793,811 | I_kwDODunzps5N81mT | 4,702 | Domain specific dataset discovery on the Hugging Face hub | {'login': 'davanstrien', 'id': 8995957, 'node_id': 'MDQ6VXNlcjg5OTU5NTc=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8995957?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/davanstrien', 'html_url': 'https://github.com/davanstrien', 'followers_url': 'https://api.github.com/users/davanstrien/followers', 'following_url': 'https://api.github.com/users/davanstrien/following{/other_user}', 'gists_url': 'https://api.github.com/users/davanstrien/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/davanstrien/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/davanstrien/subscriptions', 'organizations_url': 'https://api.github.com/users/davanstrien/orgs', 'repos_url': 'https://api.github.com/users/davanstrien/repos', 'events_url': 'https://api.github.com/users/davanstrien/events{/privacy}', 'received_events_url': 'https://api.github.com/users/davanstrien/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | false | null | [] | null | ["Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex."
"> Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.\r\n\r\nThanks, for letting me know. Will you allow the topic tags to be user-generated or only chosen from a list?"
'Thanks for opening this issue @davanstrien.\r\n\r\nAs we discussed last week, the tag approach would be in principle the simpler to be implemented, either the domain tag (with closed vocabulary: more reliable but also more rigid), or the topic tag (with open vocabulary: more flexible for user needs)'
'Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n\r\n(where i suggested using `tags: - foo - bar` IIRC.\r\n\r\nThanks a ton!'
"> Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n> \r\n> (where i suggested using `tags: - foo - bar` IIRC.\r\n> \r\n> Thanks a ton!\r\n\r\nThis doesn't ring a bell - I did a quick search of https://discuss.huggingface.co but didn't find anything. \r\n\r\nThe `tags: ` approach sounds like a good option for this. It would be especially nice if these could suggest existing tags, but this probably won't be easily possible through the current interface. \r\n"
'I opened a PR to add "tags" to the YAML validator:\r\nhttps://github.com/huggingface/datasets/pull/4716\r\n\r\nI also added "tags" to the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), with suggestions like "bio" or "newspapers"'
'Thanks @lhoestq for the initiative.\r\n \r\nJust one question: are "tags" already supported on the Hub? \r\n\r\nI think they aren\'t. Thus, the Hub should support them so that they are properly displayed.'
"I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)"
"> I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)\r\n\r\nI think this would already be a helpful start. I'm happy to try this out with the datasets added to https://huggingface.co/organizations/biglam and use the `huggingface_hub` to filter those datasets using the tags. "] | 2022-07-18 11:14:03 | 2022-07-19 15:18:11 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
## The problem
The datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data).
There are various ways of identifying datasets that may be relevant for a particular use case:
- searching
- various filters
Currently, however, there isn't an easy way to identify datasets belonging to a specific domain. For example, I want to browse machine learning datasets related to 'social science' or 'climate change research'.
The ability to identify datasets relating to a specific domain has come up in discussions around the [BigLAM](https://github.com/bigscience-workshop/lam/) datasets hackathon https://github.com/bigscience-workshop/lam/discussions/31#discussioncomment-3123610. As part of the hackathon, we're currently collecting datasets related to Libraries, Archives and Museums and making them available via the hub. We currently do this under a Hugging Face organization (https://huggingface.co/biglam). However, going forward, I can see some of these datasets being migrated to sit under an organization that is the custodian of the dataset (for example, a national library that the data was originally from). At this point, it becomes more difficult to quickly identify datasets from this domain without relying on search.
This is also related to some existing issues on Github related to metadata on the hub:
- https://github.com/huggingface/datasets/issues/3625
- https://github.com/huggingface/datasets/issues/3877
**Describe the solution you'd like**
### Some possible solutions that may help with this:
#### Enable domain tags (from a controlled vocabulary)
- This would add metadata field to the YAML for the domain a dataset relates to
- Advantages:
- the list is controlled, allowing it to be more easily integrated into the datasets tag app (https://huggingface.co/space/huggingface/datasets-tagging)
- the controlled vocabulary could align with an existing controlled vocabulary
- this additional metadata can be used to perform filtering by domain
- Disadvantages:
- choosing the best controlled vocab may be difficult
 - there are many datasets that are likely to fit into the 'machine learning' domain (i.e. there is a long tail of datasets that aren't in a more 'generic' machine learning domain)
#### Enable topic tags (user-generated)
Enable 'free form' topic tags for datasets and models. This would be closer to GitHub's repository topics which can be chosen from a controlled list (https://github.com/topics/) but can also be more user/org specific. This could potentially be useful for organizations to also manage their own models and datasets as the number they hold in their org grows. For example, they may create 'topic tags' for a specific project, so it's clearer which datasets /models are related to that project.
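As a rough sketch of how such tags could later be consumed programmatically (this assumes free-form tags would surface on each repo's `tags` field in `huggingface_hub`; the organisation and tag below are example values only):

```python
from huggingface_hub import HfApi

api = HfApi()
# keep the datasets of an organisation that carry a given topic tag
tagged = [ds for ds in api.list_datasets(author="biglam") if "newspapers" in (ds.tags or [])]
print([ds.id for ds in tagged])
```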
#### Collections
This solution would likely be the biggest shift and may require significant changes to the hub frontend. Collections could work in several different ways but would include:
Users can curate particular datasets, models, spaces, etc., into a collection. For example, they may create a collection of 'historic newspapers suitable for training language models'. These collections would not be mutually exclusive, i.e. a dataset can belong to zero, one or many collections. Collections can also potentially be nested under other collections.
This is fairly common on other data repositories; for example, the following collections:
<img width="293" alt="Screenshot 2022-07-18 at 11 50 44" src="https://user-images.githubusercontent.com/8995957/179496445-963ed122-5e26-4574-96e8-41081bce3e2b.png">
all belong under a higher level collection (https://bl.iro.bl.uk/collections/353c908d-b495-4413-b047-87236d2573e3?locale=en).
There are different models one could use for how these collections could be created:
- only within an org
- for any dataset/model
- the owner or a dataset/model has to agree to be added to a collection
- a collection owner can have people suggest additions to their collection
- other models....
These collections could be thematic, related to particular training approaches, could curate models with particular inference properties, etc. Whilst some of these features may duplicate current or future tag filters on the hub, they offer the advantage of being flexible and of not having to predict what users will want to do upfront.
There is also potential for automating the creation of these collections based on existing metadata. For example, one could collect models trained on a collection of datasets so for example, if we had a collection of 'historic newspapers suitable for training language models' that contained 30 datasets, we could create another collection 'historic newspaper language models' that takes any model on the hub whose metadata says it used one or more of those 30 datasets.
There is also the option of exploring ML approaches to suggest models/datasets may be relevant to a particular collection.
This approach is likely to be quite difficult to implement well and would require significant thought. There is also likely to be a benefit in doing quite a bit of upfront work in curating useful collections to demonstrate the benefits of collections.
**Describe alternatives you've considered**
It is possible to collate this information externally, i.e. one could link back to the relevant models/datasets from an external platform.
**Additional context**
I'm cc'ing others involved in the BigLAM hackathon who may also have thoughts @cakiki @clancyoftheoverflow @albertvillanova | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4702/reactions', 'total_count': 2, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4702/timeline | null | null | false |
260 | https://api.github.com/repos/huggingface/datasets/issues/4701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4701/comments | https://api.github.com/repos/huggingface/datasets/issues/4701/events | https://github.com/huggingface/datasets/pull/4701 | 1,307,689,625 | PR_kwDODunzps47jeE9 | 4,701 | Added more information in the README about contributors of the Arabic Speech Corpus | {'login': 'nawarhalabi', 'id': 2845798, 'node_id': 'MDQ6VXNlcjI4NDU3OTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2845798?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/nawarhalabi', 'html_url': 'https://github.com/nawarhalabi', 'followers_url': 'https://api.github.com/users/nawarhalabi/followers', 'following_url': 'https://api.github.com/users/nawarhalabi/following{/other_user}', 'gists_url': 'https://api.github.com/users/nawarhalabi/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/nawarhalabi/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/nawarhalabi/subscriptions', 'organizations_url': 'https://api.github.com/users/nawarhalabi/orgs', 'repos_url': 'https://api.github.com/users/nawarhalabi/repos', 'events_url': 'https://api.github.com/users/nawarhalabi/events{/privacy}', 'received_events_url': 'https://api.github.com/users/nawarhalabi/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | [] | 2022-07-18 09:48:03 | 2022-07-28 10:33:05 | 2022-07-28 10:33:05 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4701', 'html_url': 'https://github.com/huggingface/datasets/pull/4701', 'diff_url': 'https://github.com/huggingface/datasets/pull/4701.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4701.patch', 'merged_at': datetime.datetime(2022, 7, 28, 10, 33, 4)} | Added more information in the README about contributors and encouraged reading the thesis for more infos | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4701/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4701/timeline | null | null | true |
261 | https://api.github.com/repos/huggingface/datasets/issues/4700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4700/comments | https://api.github.com/repos/huggingface/datasets/issues/4700/events | https://github.com/huggingface/datasets/pull/4700 | 1,307,599,161 | PR_kwDODunzps47jKNx | 4,700 | Support extract lz4 compressed data files | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-18 08:41:31 | 2022-07-18 14:43:59 | 2022-07-18 14:31:47 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4700', 'html_url': 'https://github.com/huggingface/datasets/pull/4700', 'diff_url': 'https://github.com/huggingface/datasets/pull/4700.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4700.patch', 'merged_at': datetime.datetime(2022, 7, 18, 14, 31, 47)} | null | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4700/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4700/timeline | null | null | true |
262 | https://api.github.com/repos/huggingface/datasets/issues/4699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4699/comments | https://api.github.com/repos/huggingface/datasets/issues/4699/events | https://github.com/huggingface/datasets/pull/4699 | 1,307,555,592 | PR_kwDODunzps47jA6Z | 4,699 | Fix Authentification Error while streaming | {'login': 'hkjeon13', 'id': 37480967, 'node_id': 'MDQ6VXNlcjM3NDgwOTY3', 'avatar_url': 'https://avatars.githubusercontent.com/u/37480967?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/hkjeon13', 'html_url': 'https://github.com/hkjeon13', 'followers_url': 'https://api.github.com/users/hkjeon13/followers', 'following_url': 'https://api.github.com/users/hkjeon13/following{/other_user}', 'gists_url': 'https://api.github.com/users/hkjeon13/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/hkjeon13/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/hkjeon13/subscriptions', 'organizations_url': 'https://api.github.com/users/hkjeon13/orgs', 'repos_url': 'https://api.github.com/users/hkjeon13/repos', 'events_url': 'https://api.github.com/users/hkjeon13/events{/privacy}', 'received_events_url': 'https://api.github.com/users/hkjeon13/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['Hi, thanks for working on this, but the fix for this has already been merged in https://github.com/huggingface/datasets/pull/4608.'] | 2022-07-18 08:03:41 | 2022-07-20 13:10:44 | 2022-07-20 13:10:43 | NONE | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4699', 'html_url': 'https://github.com/huggingface/datasets/pull/4699', 'diff_url': 'https://github.com/huggingface/datasets/pull/4699.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4699.patch', 'merged_at': None} | I fixed a few errors when it occurs while streaming the private dataset on the Huggingface Hub.
```
from datasets import load_dataset
dataset = load_dataset(<repo_id>, use_auth_token=<private_token>, streaming=True)
for d in dataset['train']:
print(d)
break # this is for checking
```
This code is an example for streaming private datasets.
With `datasets` version 2.2.2 this works well, but with `datasets`>2.2.2 an error like this occurs:
```
/usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self)
1007 status=self.status,
1008 message=self.reason,
--> 1009 headers=self.headers,
1010 )
1011
ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/.../train-00000-of-00001-168b451062c67c34.parquet')
```
(this is an example where the dataset has the `parquet` extension)
It seems that the `xisfile` module in `download/streaming_download_manager.py` couldn't recognize the file on "https://huggingface.co/~",
so I added three lines.
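The three lines themselves are not reproduced here; purely as a hypothetical illustration of the underlying idea (sending a Bearer token when requesting Hub-hosted files), and not the actual patch in this PR:

```python
import requests

# hypothetical illustration only; placeholders, not the code added in this PR
url = "https://huggingface.co/datasets/<repo_id>/resolve/main/<data_file>.parquet"
response = requests.get(url, headers={"authorization": "Bearer <private_token>"}, stream=True)
response.raise_for_status()
```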
With this change, there is no error anymore (but this code is ad hoc).
263 | https://api.github.com/repos/huggingface/datasets/issues/4698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4698/comments | https://api.github.com/repos/huggingface/datasets/issues/4698/events | https://github.com/huggingface/datasets/pull/4698 | 1,307,539,585 | PR_kwDODunzps47i9gN | 4,698 | Enable streaming dataset to use the "all" split | {'login': 'cakiki', 'id': 3664563, 'node_id': 'MDQ6VXNlcjM2NjQ1NjM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3664563?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/cakiki', 'html_url': 'https://github.com/cakiki', 'followers_url': 'https://api.github.com/users/cakiki/followers', 'following_url': 'https://api.github.com/users/cakiki/following{/other_user}', 'gists_url': 'https://api.github.com/users/cakiki/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/cakiki/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/cakiki/subscriptions', 'organizations_url': 'https://api.github.com/users/cakiki/orgs', 'repos_url': 'https://api.github.com/users/cakiki/repos', 'events_url': 'https://api.github.com/users/cakiki/events{/privacy}', 'received_events_url': 'https://api.github.com/users/cakiki/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4698). All of your documentation changes will be reflected on that endpoint.'
"@albertvillanova \r\nAdding the validation split causes these two `assert_called_once` assertions to fail with `AssertionError: Expected 'ArrowWriter' to have been called once. Called 2 times`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/main/tests/test_builder.py#L548-L562\r\n\r\nIt might be better to create a new dummy generator for the streaming tests, WDYT? Alternatively we could test for `self.call_count` equalling 2."
'@cakiki have you read my comment in the issue page?\r\nhttps://github.com/huggingface/datasets/issues/4637#issuecomment-1175984812'
'Streaming with `split=all` seems to be working, will fix the failing test next'
'Not sure if marking the PR as "ready for review" actually notified you, so tagging @albertvillanova just in case :smiley_cat: '] | 2022-07-18 07:47:39 | 2022-08-10 08:35:33 | null | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4698', 'html_url': 'https://github.com/huggingface/datasets/pull/4698', 'diff_url': 'https://github.com/huggingface/datasets/pull/4698.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4698.patch', 'merged_at': None} | Fixes #4637 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4698/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4698/timeline | null | null | true |
264 | https://api.github.com/repos/huggingface/datasets/issues/4697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4697/comments | https://api.github.com/repos/huggingface/datasets/issues/4697/events | https://github.com/huggingface/datasets/issues/4697 | 1,307,332,253 | I_kwDODunzps5N7E6d | 4,697 | Trouble with streaming frgfm/imagenette vision dataset with TAR archive | {'login': 'frgfm', 'id': 26927750, 'node_id': 'MDQ6VXNlcjI2OTI3NzUw', 'avatar_url': 'https://avatars.githubusercontent.com/u/26927750?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/frgfm', 'html_url': 'https://github.com/frgfm', 'followers_url': 'https://api.github.com/users/frgfm/followers', 'following_url': 'https://api.github.com/users/frgfm/following{/other_user}', 'gists_url': 'https://api.github.com/users/frgfm/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/frgfm/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/frgfm/subscriptions', 'organizations_url': 'https://api.github.com/users/frgfm/orgs', 'repos_url': 'https://api.github.com/users/frgfm/repos', 'events_url': 'https://api.github.com/users/frgfm/events{/privacy}', 'received_events_url': 'https://api.github.com/users/frgfm/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3287858981, 'node_id': 'MDU6TGFiZWwzMjg3ODU4OTgx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/streaming', 'name': 'streaming', 'color': 'fef2c0', 'default': False, 'description': ''}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 
'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['Hi @frgfm, thanks for reporting.\r\n\r\nAs the error message says, streaming mode is not supported out of the box when the dataset contains TAR archive files.\r\n\r\nTo make the dataset streamable, you have to use `dl_manager.iter_archive`.\r\n\r\nThere are several examples in other datasets, e.g. food101: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nAnd yes, as the link you pointed out, for the streaming to be possible, the metadata file must be loaded before all of the images:\r\n- either this is the case when iterating the archive (and you get the metadata file before the images)\r\n- or you have to extract the metadata file by hand and upload it separately to the Hub'
"Hi @albertvillanova :wave:\r\n\r\nThanks! Yeah I saw that but since I didn't have any metadata, I wasn't sure whether I should create them myself.\r\n\r\nSo one last question:\r\nWhat is the metadata supposed to be for archives? The relative path of all files in it?\r\n_(Sorry I'm a bit confused since it's quite hard to debug using the single error message from the data preview :sweat_smile: )_"
'Hi @frgfm, streaming a dataset that contains a TAR file requires some tweaks because (contrary to ZIP files), tha TAR archive does not allow random access to any of the contained member files. Instead they have to be accessed sequentially (in the order in which they were put into the TAR file when created) and yielded.\r\n\r\nSo when iterating over the TAR file content, when an image file is found, we need to yield it (and not keeping it in memory, which will require huge RAM memory for large datasets). But when yielding an image file, we also need to yield with it what we call "metadata": the class label, and other textual information (for example, for audio files, sometimes we also add info such as the speaker ID, their sex, their age,...).\r\n\r\nAll this information usually is stored in what we call the metadata file: either a JSON or a CSV/TSV file.\r\n\r\nBut if this is also inside the TAR archive, we need to find this file in the first place when iterating the TAR archive, so that we already have this information when we find an image file and we can yield the image file and its metadata info.\r\n\r\nTherefore:\r\n- either the TAR archive contains the metadata file as the first member when iterating it (something we cannot change as it is done at the creation of the TAR file)\r\n- or if not, then we need to have the metadata file elsewhere\r\n - in these cases, what we do (if the dataset license allows it) is:\r\n - we download the TAR file locally, we extract the metadata file and we host the metadata on the Hub\r\n - we modify the dataset loading script so that it first downloads the metadata file (and reads it) and only then starts iterating the content of the TAR archive file\r\n\r\nSee an example of this process we recently did for "google/fleurs" (their metadata files for "train" were at the end of the TAR archives, after all audio files): https://huggingface.co/datasets/google/fleurs/discussions/4\r\n- we uploaded the metadata file to the Hub\r\n- we adapted the loading script to use it'
'Hi @albertvillanova :wave: \r\n\r\nThanks, since my last message, I went through the repo of https://huggingface.co/datasets/food101/blob/main/food101.py and managed to get it to work in the end :pray: \r\n\r\nHere it is: https://huggingface.co/datasets/frgfm/imagenette\r\n\r\nI appreciate you opening an issue to document the process, it might help a few!'
"Great to see that you manage to make your dataset streamable. :rocket: \r\n\r\nI'm closing this issue, as for the docs update there is another issue opened:\r\n- #4711"] | 2022-07-18 02:51:09 | 2022-08-01 15:10:57 | 2022-08-01 15:10:57 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/frgfm/imagenette
### Description
Hello there :wave:
Thanks for the amazing work you've done with HF Datasets! I've just started playing with it, and managed to upload my first dataset. But for the second one, I'm having trouble with the preview since there is some archive extraction involved :sweat_smile:
Basically, I get a:
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
I've tried several things and checked this issue https://github.com/huggingface/datasets/issues/4181 as well, but no luck so far!
Could you point me in the right direction please? :pray:
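(A minimal sketch of the `dl_manager.iter_archive` pattern the error message points at, with the archive layout and field names assumed, looks like this:)

```python
import datasets

_URL = "https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz"

class Imagenette(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"image": datasets.Image(), "label": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # download only (no extraction), so streaming can iterate the TAR sequentially
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for idx, (path, fileobj) in enumerate(files):
            # assumed layout: imagenette2/train/<class_folder>/<image>.JPEG
            if "/train/" in path and path.endswith(".JPEG"):
                yield idx, {"image": {"path": path, "bytes": fileobj.read()}, "label": path.split("/")[-2]}
```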
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4697/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4697/timeline | null | completed | false |
265 | https://api.github.com/repos/huggingface/datasets/issues/4696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4696/comments | https://api.github.com/repos/huggingface/datasets/issues/4696/events | https://github.com/huggingface/datasets/issues/4696 | 1,307,183,099 | I_kwDODunzps5N6gf7 | 4,696 | Cannot load LinCE dataset | {'login': 'finiteautomata', 'id': 167943, 'node_id': 'MDQ6VXNlcjE2Nzk0Mw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/167943?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/finiteautomata', 'html_url': 'https://github.com/finiteautomata', 'followers_url': 'https://api.github.com/users/finiteautomata/followers', 'following_url': 'https://api.github.com/users/finiteautomata/following{/other_user}', 'gists_url': 'https://api.github.com/users/finiteautomata/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/finiteautomata/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/finiteautomata/subscriptions', 'organizations_url': 'https://api.github.com/users/finiteautomata/orgs', 'repos_url': 'https://api.github.com/users/finiteautomata/repos', 'events_url': 'https://api.github.com/users/finiteautomata/events{/privacy}', 'received_events_url': 'https://api.github.com/users/finiteautomata/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 
'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['Hi @finiteautomata, thanks for reporting.\r\n\r\nUnfortunately, I\'m not able to reproduce your issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset("lince", "ner_spaeng")\r\nDownloading builder script: 20.8kB [00:00, 9.09MB/s] \r\nDownloading metadata: 31.2kB [00:00, 13.5MB/s] \r\nDownloading and preparing dataset lince/ner_spaeng (download: 2.93 MiB, generated: 18.45 MiB, post-processed: Unknown size, total: 21.38 MiB) to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.08M/3.08M [00:01<00:00, 2.73MB/s]\r\nDataset lince downloaded and prepared to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 630.66it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: [\'idx\', \'words\', \'lid\', \'ner\'],\r\n num_rows: 33611\r\n })\r\n validation: Dataset({\r\n features: [\'idx\', \'words\', \'lid\', \'ner\'],\r\n num_rows: 10085\r\n })\r\n test: Dataset({\r\n features: [\'idx\', \'words\', \'lid\', \'ner\'],\r\n num_rows: 23527\r\n })\r\n})\r\n``` \r\n\r\nPlease note that for this dataset, the original data files are not hosted on the Hugging Face Hub, but on https://ritual.uh.edu\r\nAnd sometimes, the server might be temporarily unavailable, as your error message said (trying to connect to the server timed out):\r\n```\r\nConnectionError: Couldn\'t reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host=\'ritual.uh.edu\', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, \'Connection to ritual.uh.edu timed out. (connect timeout=100)\'))")))\r\n```\r\nIn these cases you could:\r\n- either contact the owners of the data server where the data is hosted to inform them about the issue in their server\r\n- or re-try after waiting some time: usually these issues are just temporary'
'Great, thanks for checking out!'] | 2022-07-17 19:01:54 | 2022-07-18 09:20:40 | 2022-07-18 07:24:22 | NONE | null | null | null | ## Describe the bug
Cannot load LinCE dataset due to a connection error
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lince", "ner_spaeng")
```
A notebook with this code and corresponding error can be found at https://colab.research.google.com/drive/1pgX3bNB9amuUwAVfPFm-XuMV5fEg-cD2
## Expected results
It should load the dataset
## Actual results
```python
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-2-fc551ddcebef> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("lince", "ner_spaeng")
10 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
-> 1684 use_auth_token=use_auth_token,
1685 )
1686
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1219
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1222
1223 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
772
773 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/lince/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589/lince.py in _split_generators(self, dl_manager)
481 def _split_generators(self, dl_manager):
482 """Returns SplitGenerators."""
--> 483 lince_dir = dl_manager.download_and_extract(f"{_LINCE_URL}/{self.config.name}.zip")
484 data_dir = os.path.join(lince_dir, self.config.data_dir)
485 return [
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
432
433 def get_recorded_sizes_checksums(self):
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download(self, url_or_urls)
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
--> 315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
346 # Singleton
347 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 348 return function(data_struct)
349
350 disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled()
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
336
337 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
--> 197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
--> 533 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
534 elif response is not None:
535 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))")))
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
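A possible mitigation while the upstream server is flaky (assuming `DownloadConfig.max_retries` covers this download path) is to allow more retries:

```python
from datasets import DownloadConfig, load_dataset

dl_config = DownloadConfig(max_retries=5)  # assumption: retries apply to the HTTP calls shown in the traceback
dataset = load_dataset("lince", "ner_spaeng", download_config=dl_config)
```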
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4696/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4696/timeline | null | completed | false |
266 | https://api.github.com/repos/huggingface/datasets/issues/4695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4695/comments | https://api.github.com/repos/huggingface/datasets/issues/4695/events | https://github.com/huggingface/datasets/pull/4695 | 1,307,134,701 | PR_kwDODunzps47hobQ | 4,695 | Add MANtIS dataset | {'login': 'bhavitvyamalik', 'id': 19718818, 'node_id': 'MDQ6VXNlcjE5NzE4ODE4', 'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bhavitvyamalik', 'html_url': 'https://github.com/bhavitvyamalik', 'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers', 'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}', 'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions', 'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs', 'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos', 'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4695). All of your documentation changes will be reflected on that endpoint.'] | 2022-07-17 15:53:05 | 2022-07-17 16:00:15 | null | CONTRIBUTOR | null | true | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4695', 'html_url': 'https://github.com/huggingface/datasets/pull/4695', 'diff_url': 'https://github.com/huggingface/datasets/pull/4695.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4695.patch', 'merged_at': None} | This PR adds MANtIS dataset.
Arxiv: [https://arxiv.org/abs/1912.04639](https://arxiv.org/abs/1912.04639)
Github: [https://github.com/Guzpenha/MANtIS](https://github.com/Guzpenha/MANtIS)
README and dataset tags are WIP. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4695/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4695/timeline | null | null | true |
267 | https://api.github.com/repos/huggingface/datasets/issues/4694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4694/comments | https://api.github.com/repos/huggingface/datasets/issues/4694/events | https://github.com/huggingface/datasets/issues/4694 | 1,306,958,380 | I_kwDODunzps5N5pos | 4,694 | Distributed data parallel training for streaming datasets | {'login': 'cyk1337', 'id': 13767887, 'node_id': 'MDQ6VXNlcjEzNzY3ODg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/13767887?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/cyk1337', 'html_url': 'https://github.com/cyk1337', 'followers_url': 'https://api.github.com/users/cyk1337/followers', 'following_url': 'https://api.github.com/users/cyk1337/following{/other_user}', 'gists_url': 'https://api.github.com/users/cyk1337/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/cyk1337/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/cyk1337/subscriptions', 'organizations_url': 'https://api.github.com/users/cyk1337/orgs', 'repos_url': 'https://api.github.com/users/cyk1337/repos', 'events_url': 'https://api.github.com/users/cyk1337/events{/privacy}', 'received_events_url': 'https://api.github.com/users/cyk1337/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | false | null | [] | null | ['Hi ! According to https://huggingface.co/docs/datasets/use_with_pytorch#stream-data you can use the pytorch DataLoader with `num_workers>0` to distribute the shards across your workers (it uses `torch.utils.data.get_worker_info()` to get the worker ID and select the right subsets of shards to use)'] | 2022-07-17 01:29:43 | 2022-07-25 16:51:30 | null | NONE | null | null | null | ### Feature request
Is there any documentation on using `load_dataset(streaming=True)` for (multi-node, multi-GPU) DDP training?
### Motivation
Given a bunch of data files, they are expected to be split across the different GPUs. Is there a guide or documentation for this?
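For context, the documented single-process pattern looks like the sketch below (the dataset name and parameters are placeholders, and exact API details may vary by version); the open question is how to extend it cleanly across DDP ranks:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("c4", "en", split="train", streaming=True).with_format("torch")
# with num_workers > 0, each DataLoader worker reads a different subset of the dataset's shards
loader = DataLoader(ds, batch_size=8, num_workers=4)
```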
### Your contribution
Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generator()`? What is `IterableDatasetShard` expected to do? | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4694/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4694/timeline | null | null | false |
268 | https://api.github.com/repos/huggingface/datasets/issues/4693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4693/comments | https://api.github.com/repos/huggingface/datasets/issues/4693/events | https://github.com/huggingface/datasets/pull/4693 | 1,306,788,322 | PR_kwDODunzps47go-F | 4,693 | update `samsum` script | {'login': 'bhavitvyamalik', 'id': 19718818, 'node_id': 'MDQ6VXNlcjE5NzE4ODE4', 'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bhavitvyamalik', 'html_url': 'https://github.com/bhavitvyamalik', 'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers', 'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}', 'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions', 'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs', 'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos', 'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4693). All of your documentation changes will be reflected on that endpoint.'] | 2022-07-16 11:53:05 | 2022-07-19 11:45:43 | null | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4693', 'html_url': 'https://github.com/huggingface/datasets/pull/4693', 'diff_url': 'https://github.com/huggingface/datasets/pull/4693.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4693.patch', 'merged_at': None} | update `samsum` script after #4672 was merged (citation is also updated) | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4693/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4693/timeline | null | null | true |
269 | https://api.github.com/repos/huggingface/datasets/issues/4692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4692/comments | https://api.github.com/repos/huggingface/datasets/issues/4692/events | https://github.com/huggingface/datasets/issues/4692 | 1,306,609,680 | I_kwDODunzps5N4UgQ | 4,692 | Unable to cast a column with `Image()` by using the `cast_column()` feature | {'login': 'skrishnan99', 'id': 28833916, 'node_id': 'MDQ6VXNlcjI4ODMzOTE2', 'avatar_url': 'https://avatars.githubusercontent.com/u/28833916?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/skrishnan99', 'html_url': 'https://github.com/skrishnan99', 'followers_url': 'https://api.github.com/users/skrishnan99/followers', 'following_url': 'https://api.github.com/users/skrishnan99/following{/other_user}', 'gists_url': 'https://api.github.com/users/skrishnan99/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/skrishnan99/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/skrishnan99/subscriptions', 'organizations_url': 'https://api.github.com/users/skrishnan99/orgs', 'repos_url': 'https://api.github.com/users/skrishnan99/repos', 'events_url': 'https://api.github.com/users/skrishnan99/events{/privacy}', 'received_events_url': 'https://api.github.com/users/skrishnan99/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['Hi, thanks for reporting! A PR (https://github.com/huggingface/datasets/pull/4614) has already been opened to address this issue.'] | 2022-07-15 22:56:03 | 2022-07-19 13:36:24 | 2022-07-19 13:36:24 | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
When I create a dataset, then add a column to the created dataset through the `dataset.add_column` feature and then try to cast a column of the dataset (this column contains image paths) with `Image()` by using the `cast_column()` feature, I get the following error - ``` TypeError: Couldn't cast array of type
string
to
{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```
When I try and cast the same column, but without doing the `add_column` in the previous step, it works as expected.
## Steps to reproduce the bug
```python
from datasets import Dataset, Image
data_dict = {
"img_path": ["https://picsum.photos/200/300"]
}
dataset = Dataset.from_dict(data_dict)
#NOTE Comment out this line and use cast_column and it works properly
dataset = dataset.add_column("yeet", [1])
#NOTE This line fails to execute properly if `add_column` is called before
dataset = dataset.cast_column("img_path", Image())
# #NOTE This is my current workaround. This seems to work fine with/without `add_column`. While
# # running this, make sure to comment out the `cast_column` line
# new_features = dataset.features.copy()
# new_features["img_path"] = Image()
# dataset = dataset.cast(new_features)
print(dataset)
print(dataset.features)
print(dataset[0])
```
## Expected results
Being able to successfully use `cast_column` to cast a column containing image paths to `Image()` features after modifying the dataset with `add_column` in a previous step.
## Actual results
```
Traceback (most recent call last):
File "/home/surya/Desktop/hf_bug_test.py", line 14, in <module>
dataset = dataset.cast_column("img_path", Image())
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1580, in cast_column
dataset._data = dataset._data.cast(dataset.features.arrow_schema)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1487, in cast
new_tables.append(subtable.cast(subschema, *args, **kwargs))
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 834, in cast
return InMemoryTable(table_cast(self.table, *args, **kwargs))
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1897, in table_cast
return cast_table_to_schema(table, schema)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1846, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
string
to
{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4692/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4692/timeline | null | completed | false |
270 | https://api.github.com/repos/huggingface/datasets/issues/4691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4691/comments | https://api.github.com/repos/huggingface/datasets/issues/4691/events | https://github.com/huggingface/datasets/issues/4691 | 1,306,389,656 | I_kwDODunzps5N3eyY | 4,691 | Dataset Viewer issue for rajistics/indian_food_images | {'login': 'rajshah4', 'id': 6808012, 'node_id': 'MDQ6VXNlcjY4MDgwMTI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6808012?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/rajshah4', 'html_url': 'https://github.com/rajshah4', 'followers_url': 'https://api.github.com/users/rajshah4/followers', 'following_url': 'https://api.github.com/users/rajshah4/following{/other_user}', 'gists_url': 'https://api.github.com/users/rajshah4/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/rajshah4/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/rajshah4/subscriptions', 'organizations_url': 'https://api.github.com/users/rajshah4/orgs', 'repos_url': 'https://api.github.com/users/rajshah4/repos', 'events_url': 'https://api.github.com/users/rajshah4/events{/privacy}', 'received_events_url': 'https://api.github.com/users/rajshah4/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 
'site_admin': False}] | null | ['Hi, thanks for reporting. I triggered a refresh of the preview for this dataset, and it works now. I\'m not sure what occurred.\r\n<img width="1019" alt="Capture d’écran 2022-07-18 à 11 01 52" src="https://user-images.githubusercontent.com/1676121/179541327-f62ecd5e-a18a-4d91-b316-9e2ebde77a28.png">\r\n\r\n'] | 2022-07-15 19:03:15 | 2022-07-18 15:02:03 | 2022-07-18 15:02:03 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/rajistics/indian_food_images/viewer/rajistics--indian_food_images/train
### Description
I have a train/test split in my dataset
<img width="410" alt="Screen Shot 2022-07-15 at 11 44 42 AM" src="https://user-images.githubusercontent.com/6808012/179293215-7b419ec3-3527-46f2-8dad-adbc5568cfa0.png">
The dataset viewer works for the test split (images of indian food), but does not show my train split. My guess is that some corrupt image file is causing this, but I have no idea.
The original dataset was pulled from here: https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification?resource=download-directory
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4691/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4691/timeline | null | completed | false |
271 | https://api.github.com/repos/huggingface/datasets/issues/4690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4690/comments | https://api.github.com/repos/huggingface/datasets/issues/4690/events | https://github.com/huggingface/datasets/pull/4690 | 1,306,321,975 | PR_kwDODunzps47fG6w | 4,690 | Refactor base extractors | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 17:47:48 | 2022-07-18 08:46:56 | 2022-07-18 08:34:49 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4690', 'html_url': 'https://github.com/huggingface/datasets/pull/4690', 'diff_url': 'https://github.com/huggingface/datasets/pull/4690.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4690.patch', 'merged_at': datetime.datetime(2022, 7, 18, 8, 34, 49)} | This PR:
- Refactors base extractors as subclasses of `BaseExtractor`:
- this is an abstract class defining the interface with:
- `is_extractable`: abstract class method
- `extract`: abstract static method
- Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`):
- this has a default implementation of `is_extractable`
  - this improves performance (reducing the number of file reads) by allowing an already read `magic_number` to be passed
- Refactors `Extractor`:
  - reads the magic number from the file only once (a rough sketch of the refactored interface is shown below)
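A minimal sketch of the interface described in the bullets above (class and method names follow the PR description, but the bodies, type hints, and the `magic_numbers` attribute are assumptions, not the actual `datasets` source):
```python
# Sketch only: illustrates the abstract interface plus a default
# magic-number check that avoids re-reading the file per extractor.
from abc import ABC, abstractmethod


class BaseExtractor(ABC):
    @classmethod
    @abstractmethod
    def is_extractable(cls, path: str, **kwargs) -> bool:
        ...

    @staticmethod
    @abstractmethod
    def extract(input_path: str, output_path: str) -> None:
        ...


class MagicNumberBaseExtractor(BaseExtractor, ABC):
    magic_numbers: list = []  # concrete subclasses fill this in

    @classmethod
    def is_extractable(cls, path: str, magic_number: bytes = b"") -> bool:
        # Reuse an already-read magic number so each candidate extractor
        # does not have to open the file again.
        if not magic_number:
            length = max((len(m) for m in cls.magic_numbers), default=0)
            with open(path, "rb") as f:
                magic_number = f.read(length)
        return any(magic_number.startswith(m) for m in cls.magic_numbers)
```
A concrete extractor would then only declare its `magic_numbers` and implement `extract`.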
This PR deprecates:
```python
is_extractable, extractor = self.extractor.is_extractable(input_path, return_extractor=True)
self.extractor.extract(input_path, output_path, extractor=extractor)
```
and uses a more Pythonic approach instead:
```python
extractor_format = self.extractor.infer_extractor_format(input_path)
self.extractor.extract(input_path, output_path, extractor_format)
``` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4690/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4690/timeline | null | null | true |
272 | https://api.github.com/repos/huggingface/datasets/issues/4689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4689/comments | https://api.github.com/repos/huggingface/datasets/issues/4689/events | https://github.com/huggingface/datasets/pull/4689 | 1,306,230,203 | PR_kwDODunzps47eyw5 | 4,689 | Test extractors for all compression formats | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 16:29:55 | 2022-07-15 17:47:02 | 2022-07-15 17:35:24 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4689', 'html_url': 'https://github.com/huggingface/datasets/pull/4689', 'diff_url': 'https://github.com/huggingface/datasets/pull/4689.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4689.patch', 'merged_at': datetime.datetime(2022, 7, 15, 17, 35, 24)} | This PR:
- Adds all compression formats to `test_extractor`
- Tests each base extractor for all compression formats
Note that all compression formats are tested except "rar". | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4689/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4689/timeline | null | null | true |
273 | https://api.github.com/repos/huggingface/datasets/issues/4688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4688/comments | https://api.github.com/repos/huggingface/datasets/issues/4688/events | https://github.com/huggingface/datasets/pull/4688 | 1,306,100,488 | PR_kwDODunzps47eW6C | 4,688 | Skip test_extractor only for zstd param if zstandard not installed | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 14:23:47 | 2022-07-15 15:27:53 | 2022-07-15 15:15:24 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4688', 'html_url': 'https://github.com/huggingface/datasets/pull/4688', 'diff_url': 'https://github.com/huggingface/datasets/pull/4688.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4688.patch', 'merged_at': datetime.datetime(2022, 7, 15, 15, 15, 24)} | Currently, if `zstandard` is not installed, `test_extractor` is skipped for all compression format parameters.
This PR fixes `test_extractor` so that if `zstandard` is not installed, `test_extractor` is skipped only for the `zstd` compression parameter, that is, it is not skipped for all the other compression parameters (`gzip`, `xz`,...). | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4688/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4688/timeline | null | null | true |
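A hedged sketch of the per-parameter skip pattern this PR describes; the parameter list and test body below are illustrative, not the repository's actual test code:
```python
import importlib.util

import pytest

zstandard_available = importlib.util.find_spec("zstandard") is not None


@pytest.mark.parametrize(
    "compression_format",
    [
        "gzip",
        "xz",
        pytest.param(
            "zstd",
            marks=pytest.mark.skipif(not zstandard_available, reason="zstandard not installed"),
        ),
    ],
)
def test_extractor(compression_format):
    # Only the "zstd" case is skipped when zstandard is missing;
    # every other format still runs.
    ...
```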
274 | https://api.github.com/repos/huggingface/datasets/issues/4687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4687/comments | https://api.github.com/repos/huggingface/datasets/issues/4687/events | https://github.com/huggingface/datasets/pull/4687 | 1,306,021,415 | PR_kwDODunzps47eF_E | 4,687 | Trigger CI also on push to main | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 13:11:29 | 2022-07-15 13:47:21 | 2022-07-15 13:35:23 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4687', 'html_url': 'https://github.com/huggingface/datasets/pull/4687', 'diff_url': 'https://github.com/huggingface/datasets/pull/4687.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4687.patch', 'merged_at': datetime.datetime(2022, 7, 15, 13, 35, 23)} | Currently, new CI (on GitHub Actions) is only triggered on pull requests branches when the base branch is main.
This PR also triggers the CI when a PR is merged to main branch. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4687/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4687/timeline | null | null | true |
275 | https://api.github.com/repos/huggingface/datasets/issues/4686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4686/comments | https://api.github.com/repos/huggingface/datasets/issues/4686/events | https://github.com/huggingface/datasets/pull/4686 | 1,305,974,924 | PR_kwDODunzps47d8Jf | 4,686 | Align logging with Transformers (again) | {'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | ['The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4686). All of your documentation changes will be reflected on that endpoint.'
"I wasn't aware of https://github.com/huggingface/datasets/pull/1845 before opening this PR. This issue seems much more complex now ..."] | 2022-07-15 12:24:29 | 2022-07-15 15:27:43 | null | CONTRIBUTOR | null | true | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4686', 'html_url': 'https://github.com/huggingface/datasets/pull/4686', 'diff_url': 'https://github.com/huggingface/datasets/pull/4686.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4686.patch', 'merged_at': None} | Fix #2832 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4686/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4686/timeline | null | null | true |
276 | https://api.github.com/repos/huggingface/datasets/issues/4685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4685/comments | https://api.github.com/repos/huggingface/datasets/issues/4685/events | https://github.com/huggingface/datasets/pull/4685 | 1,305,861,708 | PR_kwDODunzps47dju8 | 4,685 | Fix mock fsspec | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 10:23:12 | 2022-07-15 13:05:03 | 2022-07-15 12:52:40 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4685', 'html_url': 'https://github.com/huggingface/datasets/pull/4685', 'diff_url': 'https://github.com/huggingface/datasets/pull/4685.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4685.patch', 'merged_at': datetime.datetime(2022, 7, 15, 12, 52, 40)} | This PR:
- Removes an unused method from `DummyTestFS`
- Refactors `mock_fsspec` to make it simpler | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4685/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4685/timeline | null | null | true |
277 | https://api.github.com/repos/huggingface/datasets/issues/4684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4684/comments | https://api.github.com/repos/huggingface/datasets/issues/4684/events | https://github.com/huggingface/datasets/issues/4684 | 1,305,554,654 | I_kwDODunzps5N0S7e | 4,684 | How to assign new values to Dataset? | {'login': 'beyondguo', 'id': 37113676, 'node_id': 'MDQ6VXNlcjM3MTEzNjc2', 'avatar_url': 'https://avatars.githubusercontent.com/u/37113676?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/beyondguo', 'html_url': 'https://github.com/beyondguo', 'followers_url': 'https://api.github.com/users/beyondguo/followers', 'following_url': 'https://api.github.com/users/beyondguo/following{/other_user}', 'gists_url': 'https://api.github.com/users/beyondguo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/beyondguo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/beyondguo/subscriptions', 'organizations_url': 'https://api.github.com/users/beyondguo/orgs', 'repos_url': 'https://api.github.com/users/beyondguo/repos', 'events_url': 'https://api.github.com/users/beyondguo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/beyondguo/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | false | null | [] | null | ['Hi! One option is use `map` with a function that overwrites the labels (`dset = dset.map(lamba _: {"label": 0}, features=dset.features`)). Or you can use the `remove_column` + `add_column` combination (`dset = dset.remove_columns("label").add_column("label", [0]*len(data)).cast(dset.features)`, but note that this approach creates an in-memory table for the added column instead of writing to disk, which could be problematic for large datasets.'] | 2022-07-15 04:17:57 | 2022-07-15 16:18:21 | null | NONE | null | null | null | ![image](https://user-images.githubusercontent.com/37113676/179149159-bbbda0c8-a661-403c-87ed-dc2b4219cd68.png)
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import load_dataset
data = load_dataset('glue','sst2')
data['train']['label'] = [0]*len(data)
```
I will get the error:
```
TypeError: 'Dataset' object does not support item assignment
``` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4684/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4684/timeline | null | null | false |
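A hedged sketch of the `map`-based approach suggested in the comment on this issue: overwrite the column instead of assigning to it, and pass the original features so the `label` column keeps its `ClassLabel` type (setting every label to `0` is just the example from the question):
```python
from datasets import load_dataset

data = load_dataset("glue", "sst2")
# map merges the returned dict into each example, overwriting "label".
data["train"] = data["train"].map(
    lambda _: {"label": 0},
    features=data["train"].features,
)
```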
278 | https://api.github.com/repos/huggingface/datasets/issues/4683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4683/comments | https://api.github.com/repos/huggingface/datasets/issues/4683/events | https://github.com/huggingface/datasets/pull/4683 | 1,305,443,253 | PR_kwDODunzps47cLkm | 4,683 | Update create dataset card docs | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-15 00:41:29 | 2022-07-18 17:26:00 | 2022-07-18 13:24:10 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4683', 'html_url': 'https://github.com/huggingface/datasets/pull/4683', 'diff_url': 'https://github.com/huggingface/datasets/pull/4683.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4683.patch', 'merged_at': datetime.datetime(2022, 7, 18, 13, 24, 10)} | This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all possible values a user can select in the dropdown menus, whereas the online dataset card creator doesn't, which can make it difficult to know what tag values to input.
Let me know what you think! :) | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4683/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4683/timeline | null | null | true |
279 | https://api.github.com/repos/huggingface/datasets/issues/4682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4682/comments | https://api.github.com/repos/huggingface/datasets/issues/4682/events | https://github.com/huggingface/datasets/issues/4682 | 1,304,788,215 | I_kwDODunzps5NxXz3 | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | {'login': 'eunseojo', 'id': 12104720, 'node_id': 'MDQ6VXNlcjEyMTA0NzIw', 'avatar_url': 'https://avatars.githubusercontent.com/u/12104720?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eunseojo', 'html_url': 'https://github.com/eunseojo', 'followers_url': 'https://api.github.com/users/eunseojo/followers', 'following_url': 'https://api.github.com/users/eunseojo/following{/other_user}', 'gists_url': 'https://api.github.com/users/eunseojo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/eunseojo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/eunseojo/subscriptions', 'organizations_url': 'https://api.github.com/users/eunseojo/orgs', 'repos_url': 'https://api.github.com/users/eunseojo/repos', 'events_url': 'https://api.github.com/users/eunseojo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/eunseojo/received_events', 'type': 'User', 'site_admin': False} | [] | open | false | null | [] | null | [] | 2022-07-14 13:26:47 | 2022-07-14 13:26:47 | null | NONE | null | null | null | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". the original files are jsonl formatted. I was trying to iterate through via streaming mode and grab all "score_title_description" values, but I kept getting key not found after a certain point of iteration. I found that some json objects in the file don't have "score_title_description". And in SOME cases, this returns a NONE and in others it just gets a key error. Why is there an inconsistency here and how can I fix it? | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4682/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4682/timeline | null | null | false |
280 | https://api.github.com/repos/huggingface/datasets/issues/4681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4681/comments | https://api.github.com/repos/huggingface/datasets/issues/4681/events | https://github.com/huggingface/datasets/issues/4681 | 1,304,617,484 | I_kwDODunzps5NwuIM | 4,681 | IndexError when loading ImageFolder | {'login': 'johko', 'id': 2843485, 'node_id': 'MDQ6VXNlcjI4NDM0ODU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2843485?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/johko', 'html_url': 'https://github.com/johko', 'followers_url': 'https://api.github.com/users/johko/followers', 'following_url': 'https://api.github.com/users/johko/following{/other_user}', 'gists_url': 'https://api.github.com/users/johko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/johko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/johko/subscriptions', 'organizations_url': 'https://api.github.com/users/johko/orgs', 'repos_url': 'https://api.github.com/users/johko/repos', 'events_url': 'https://api.github.com/users/johko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/johko/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 
'site_admin': False}] | null | ['Hi, thanks for reporting! If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/arrow_writer.py#L387) as `asdict(info)` calls `Features({..., "label": {\'num_classes\': 0, \'names\': [], \'id\': None, \'_type\': \'ClassLabel\'}})`, which then calls `require_decoding` [here](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/features/features.py#L1516) on the dict value it does not expect.\r\n\r\nI see two ways to fix this:\r\n* custom `asdict` where `dict_factory` is also applied on the `dict` object itself besides dataclasses (the built-in implementation calls `type(dict_obj)` - this means we also need to fix `Features.to_dict` btw) \r\n* implement `DatasetInfo.to_dict` (though adding `to_dict` to a data class is a bit weird IMO)\r\n\r\n@lhoestq Which one of these approaches do you like more?\r\n'
'Small pref for the first option, it feels weird to know that `Features()` can be called with a dictionary of types defined as dictionaries instead of type instances.'] | 2022-07-14 10:57:55 | 2022-07-25 12:37:54 | 2022-07-25 12:37:54 | NONE | null | null | null | ## Describe the bug
Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).
## Steps to reproduce the bug
Put a csv file in a folder with images and load it:
```python
import datasets
datasets.load_dataset("imagefolder", data_dir="path/to/folder")
```
## Expected results
I would expect a better error message, like `Unsupported file` or even the dataset loader just ignoring every file that is not an image in that case.
## Actual results
Here is the whole traceback:
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.11.0-051100-generic-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4681/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4681/timeline | null | completed | false |
281 | https://api.github.com/repos/huggingface/datasets/issues/4680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4680/comments | https://api.github.com/repos/huggingface/datasets/issues/4680/events | https://github.com/huggingface/datasets/issues/4680 | 1,304,534,770 | I_kwDODunzps5NwZ7y | 4,680 | Dataset Viewer issue for codeparrot/xlcost-text-to-code | {'login': 'loubnabnl', 'id': 44069155, 'node_id': 'MDQ6VXNlcjQ0MDY5MTU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/44069155?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/loubnabnl', 'html_url': 'https://github.com/loubnabnl', 'followers_url': 'https://api.github.com/users/loubnabnl/followers', 'following_url': 'https://api.github.com/users/loubnabnl/following{/other_user}', 'gists_url': 'https://api.github.com/users/loubnabnl/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/loubnabnl/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/loubnabnl/subscriptions', 'organizations_url': 'https://api.github.com/users/loubnabnl/orgs', 'repos_url': 'https://api.github.com/users/loubnabnl/repos', 'events_url': 'https://api.github.com/users/loubnabnl/events{/privacy}', 'received_events_url': 'https://api.github.com/users/loubnabnl/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['There seems to be an issue with the `C++-snippet-level` config:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names("codeparrot/xlcost-text-to-code", "C++-snippet-level")\r\nTraceback (most recent call last):\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 352, in get_dataset_config_info\r\n info.splits = {\r\nTypeError: \'NoneType\' object is not iterable\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nI remove the dataset-viewer tag since it\'s not directly related.\r\n\r\nPinging @huggingface/datasets '
"Thanks I found that this subset wasn't properly defined the the config, I fixed it. Now I can see the subsets but I get this error for the viewer\r\n````\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split cache is empty.\r\n```"
'Yes, the cache is being refreshed, hopefully, it will work in some minutes for all the splits. Some are already here:\r\n\r\nhttps://huggingface.co/datasets/codeparrot/xlcost-text-to-code/viewer/Python-snippet-level/train\r\n\r\n<img width="1533" alt="Capture d’écran 2022-07-18 à 12 04 06" src="https://user-images.githubusercontent.com/1676121/179553933-64d874fa-ada9-4b82-900e-082619523c20.png">\r\n'
'I think all the splits are working as expected now'
'Perfect, thank you!'] | 2022-07-14 09:45:50 | 2022-07-18 16:37:00 | 2022-07-18 16:04:36 | NONE | null | null | null | ### Link
https://huggingface.co/datasets/codeparrot/xlcost-text-to-code
### Description
Error
```
Server Error
Status code: 400
Exception: TypeError
Message: 'NoneType' object is not iterable
```
Before I did a minor change in the dataset script (removing some comments), the viewer was working but not properly: it wasn't showing the dataset subsets. But the data can be loaded successfully.
Thanks!
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4680/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4680/timeline | null | completed | false |
282 | https://api.github.com/repos/huggingface/datasets/issues/4679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4679/comments | https://api.github.com/repos/huggingface/datasets/issues/4679/events | https://github.com/huggingface/datasets/pull/4679 | 1,303,980,648 | PR_kwDODunzps47XX67 | 4,679 | Added method to remove excess nesting in a DatasetDict | {'login': 'CakeCrusher', 'id': 37946988, 'node_id': 'MDQ6VXNlcjM3OTQ2OTg4', 'avatar_url': 'https://avatars.githubusercontent.com/u/37946988?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/CakeCrusher', 'html_url': 'https://github.com/CakeCrusher', 'followers_url': 'https://api.github.com/users/CakeCrusher/followers', 'following_url': 'https://api.github.com/users/CakeCrusher/following{/other_user}', 'gists_url': 'https://api.github.com/users/CakeCrusher/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/CakeCrusher/subscriptions', 'organizations_url': 'https://api.github.com/users/CakeCrusher/orgs', 'repos_url': 'https://api.github.com/users/CakeCrusher/repos', 'events_url': 'https://api.github.com/users/CakeCrusher/events{/privacy}', 'received_events_url': 'https://api.github.com/users/CakeCrusher/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['Hi ! I think the issue you linked is closed and suggests to use `remove_columns`.\r\n\r\nMoreover if you end up with a dataset with an unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if you function take batches as input'
"Hi @lhoestq , you are right about the issues this pull has steered beyond that issue. I created this [colab notebook](https://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) to present the error. I tried using batch and that won't resolve it either. I'm looking into that error right now."
'I think you just need to pass one example at a time to your tokenizer, this way you don\'t end up with nested data:\r\n```python\r\n\r\ndef preprocessFunction(row):\r\n collatedContext = tokenizer.eos_token.join([row["context"+str(i+1)] for i in range(int(AMT_OF_CONTEXT))])\r\n response = row["response"]\r\n tokenizedContext = tokenizer(\r\n collatedContext, max_length=max_context_length, truncation=True # don\'t pass as a list here\r\n )\r\n with tokenizer.as_target_tokenizer():\r\n tokenized_response = tokenizer(\r\n response, max_length=max_response_length, truncation=True # don\'t pass a a list here\r\n )\r\n tokenizedContext["labels"] = tokenized_response["input_ids"]\r\n return tokenizedContext\r\n```'
'Yes that is correct, the purpose of this pull is to advise of a more general solution like with `def remove_excess_nesting(self)` or maybe automate the solution (stas00 advised not to automate it as it could "not be backwards compatible").'
"I'm not sure I understand how having `remove_excess_nesting` would make more sense than just fixing the preprocessFunction to simply not return nested samples, can you elaborate ?"
'Figuring out the issue can be a bit difficult to figure out. Only until I added batch does it make a little more sense with the error\r\n\r\n> sequence item 0: expected str instance, list found\r\n\r\nbut batch was never intended.\r\n\r\nWhen you run the colab you will notice that only until collating do you learn there is this error. So i figured it would be better to address it during at the `DatasetDict` level.\r\nI think it would be ideal if the user could be notified at the preprocess function.'
"I'm not arguing that `remove_excess_nesting` is the right solution but what I aim to address is dealing with unnecessary nesting as early as possible."
"> When you run the colab you will notice that only until collating do you learn there is this error.\r\n\r\nI think users can just check the `dataset.features` and they would notice that the data are nested\r\n```python\r\n{\r\n 'input_ids': Sequence(Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), length=-1, id=None)\r\n ...\r\n}\r\n```\r\n\r\nSometime nested data are intentional, so you can't know in advance if it's a user's mistake or something planned."
'Yes, I understand, it could be intentional and only the collator has problems with it. So, it is not worth handling it any differently in any other non-erroneous data. \r\n\r\nThat being said do you think there is any use for the `remove_excess_nesting` method? Or maybe it should be applied in a different way? If not feel free to close this PR. '
"I think users can write it and use `map` themselves if needed, it is pretty straightforward to implement.\r\n\r\nI'm closing this PR if you don't mind, and thank you for the discussion :)"
'No problem @lhoestq , thanks for walking me through it.'] | 2022-07-13 21:49:37 | 2022-07-21 15:55:26 | 2022-07-21 10:55:02 | NONE | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4679', 'html_url': 'https://github.com/huggingface/datasets/pull/4679', 'diff_url': 'https://github.com/huggingface/datasets/pull/4679.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4679.patch', 'merged_at': None} | Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505).
@stas00 @lhoestq | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4679/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4679/timeline | null | null | true |
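Following the suggestion in the last comments that users can implement this themselves with `map`, here is a hedged sketch of such a helper; the flattening heuristic is illustrative, should only be applied to columns known to be unintentionally nested, and `dataset_dict` stands in for any tokenized `DatasetDict`:
```python
def remove_excess_nesting(batch):
    # Drop one nesting level from values shaped like [[...]] (a single-element
    # outer list), leaving everything else untouched.
    return {
        name: [
            value[0]
            if isinstance(value, list) and len(value) == 1 and isinstance(value[0], list)
            else value
            for value in column
        ]
        for name, column in batch.items()
    }


dataset_dict = dataset_dict.map(remove_excess_nesting, batched=True)
```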
283 | https://api.github.com/repos/huggingface/datasets/issues/4678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4678/comments | https://api.github.com/repos/huggingface/datasets/issues/4678/events | https://github.com/huggingface/datasets/issues/4678 | 1,303,741,432 | I_kwDODunzps5NtYP4 | 4,678 | Cant pass streaming dataset to dataloader after take() | {'login': 'zankner', 'id': 39166683, 'node_id': 'MDQ6VXNlcjM5MTY2Njgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/39166683?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/zankner', 'html_url': 'https://github.com/zankner', 'followers_url': 'https://api.github.com/users/zankner/followers', 'following_url': 'https://api.github.com/users/zankner/following{/other_user}', 'gists_url': 'https://api.github.com/users/zankner/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/zankner/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/zankner/subscriptions', 'organizations_url': 'https://api.github.com/users/zankner/orgs', 'repos_url': 'https://api.github.com/users/zankner/repos', 'events_url': 'https://api.github.com/users/zankner/events{/privacy}', 'received_events_url': 'https://api.github.com/users/zankner/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ['Hi! Calling `take` on an iterable/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the first `50_000/batch_size` batches in the loop.'] | 2022-07-13 17:34:18 | 2022-07-14 13:07:21 | null | NONE | null | null | null | ## Describe the bug
I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions such as `shuffle()` can be applied without breaking the dataloader, but `take()` cannot.
## Steps to reproduce the bug
```python
import datasets
import torch
dset = datasets.load_dataset(path='c4', name='en', split="train", streaming=True)
dset = dset.take(50_000)
dset = dset.with_format("torch")
num_workers = 8
batch_size = 512
loader = torch.utils.data.DataLoader(dataset=dset,
batch_size=batch_size,
num_workers=num_workers)
for batch in loader:
...
```
## Expected results
No error thrown when iterating over the dataloader
## Actual results
```
Original Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/root/.local/lib/python3.9/site-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py", line 48, in __iter__
for key, example in self._iter_shard(shard_idx):
File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 586, in _iter_shard
yield from ex_iterable.shard_data_sources(shard_idx)
File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 60, in shard_data_sources
raise NotImplementedError(f"{type(self)} doesn't implement shard_data_sources yet")
NotImplementedError: <class 'datasets.iterable_dataset.TakeExamplesIterable'> doesn't implement shard_data_sources yet
```
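A hedged workaround sketch based on the suggestion in the comments: either drop multi-process loading (since `take()` currently makes the stream unshardable) or keep the workers, skip `take()`, and stop iterating manually; the names reuse those from the reproduction snippet above:
```python
# Option 1 (sketch): single-process loading, so no sharding is required.
loader = torch.utils.data.DataLoader(dataset=dset, batch_size=batch_size)

# Option 2 (sketch): keep num_workers but drop take(), stopping after ~50_000 examples.
full_dset = datasets.load_dataset(path="c4", name="en", split="train", streaming=True).with_format("torch")
loader = torch.utils.data.DataLoader(dataset=full_dset, batch_size=batch_size, num_workers=num_workers)
max_batches = 50_000 // batch_size
for i, batch in enumerate(loader):
    if i >= max_batches:
        break
```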
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4678/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 1} | https://api.github.com/repos/huggingface/datasets/issues/4678/timeline | null | null | false |
284 | https://api.github.com/repos/huggingface/datasets/issues/4677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4677/comments | https://api.github.com/repos/huggingface/datasets/issues/4677/events | https://github.com/huggingface/datasets/issues/4677 | 1,302,258,440 | I_kwDODunzps5NnuMI | 4,677 | Random 400 Client Error when pushing dataset | {'login': 'msis', 'id': 577139, 'node_id': 'MDQ6VXNlcjU3NzEzOQ==', 'avatar_url': 'https://avatars.githubusercontent.com/u/577139?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/msis', 'html_url': 'https://github.com/msis', 'followers_url': 'https://api.github.com/users/msis/followers', 'following_url': 'https://api.github.com/users/msis/following{/other_user}', 'gists_url': 'https://api.github.com/users/msis/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/msis/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/msis/subscriptions', 'organizations_url': 'https://api.github.com/users/msis/orgs', 'repos_url': 'https://api.github.com/users/msis/repos', 'events_url': 'https://api.github.com/users/msis/events{/privacy}', 'received_events_url': 'https://api.github.com/users/msis/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | [] | 2022-07-12 15:56:44 | 2022-07-12 15:56:44 | null | NONE | null | null | null | ## Describe the bug
When pushing a dataset, the client errors randomly with `Bad Request for url:...`.
At the next call, a new parquet file is created for each shard.
The client may fail at any random shard.
## Steps to reproduce the bug
```python
dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
```
## Expected results
Push the entire dataset to the Hub with no duplicates.
If it fails, it should retry or fail, but continue from the last failed shard.
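In the meantime, a hedged sketch of a client-side retry wrapper (illustrative only: it retries the whole `push_to_hub` call and does not deduplicate shards that were already uploaded before the failure):
```python
import time

from requests.exceptions import HTTPError


def push_with_retries(dataset, repo_id, max_retries=5, base_wait=2.0, **push_kwargs):
    for attempt in range(max_retries):
        try:
            return dataset.push_to_hub(repo_id, **push_kwargs)
        except HTTPError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff before retrying the whole push.
            time.sleep(base_wait * 2**attempt)


# e.g. push_with_retries(dataset, "ORG/DATASET", private=True, branch="main")
```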
## Actual results
```
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
testing.ipynb Cell 29 in <cell line: 1>()
----> [1](testing.ipynb?line=0) dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4297, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, max_shard_size, shard_size, embed_external_files)
4291 warnings.warn(
4292 "'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.",
4293 FutureWarning,
4294 )
4295 max_shard_size = shard_size
-> 4297 repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
4298 repo_id=repo_id,
4299 split=split,
4300 private=private,
4301 token=token,
4302 branch=branch,
4303 max_shard_size=max_shard_size,
4304 embed_external_files=embed_external_files,
4305 )
4306 organization, dataset_name = repo_id.split("/")
4307 info_to_dump = self.info.copy()
File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4195, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4193 shard.to_parquet(buffer)
4194 uploaded_size += buffer.tell()
-> 4195 _retry(
4196 api.upload_file,
4197 func_kwargs=dict(
4198 path_or_fileobj=buffer.getvalue(),
4199 path_in_repo=shard_path_in_repo,
4200 repo_id=repo_id,
4201 token=token,
4202 repo_type="dataset",
4203 revision=branch,
4204 identical_ok=False,
4205 ),
4206 exceptions=HTTPError,
4207 status_codes=[504],
4208 base_wait_time=2.0,
4209 max_retries=5,
4210 max_wait_time=20.0,
4211 )
4212 shards_path_in_repo.append(shard_path_in_repo)
4214 # Cleanup to remove unused files
File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:284, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
282 except exceptions as err:
283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
--> 284 raise err
285 else:
286 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff
File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:281, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
279 while True:
280 try:
--> 281 return func(*func_args, **func_kwargs)
282 except exceptions as err:
283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1967, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, identical_ok, commit_message, commit_description, create_pr)
1957 commit_message = (
1958 commit_message
1959 if commit_message is not None
1960 else f"Upload {path_in_repo} with huggingface_hub"
1961 )
1962 operation = CommitOperationAdd(
1963 path_or_fileobj=path_or_fileobj,
1964 path_in_repo=path_in_repo,
1965 )
-> 1967 pr_url = self.create_commit(
1968 repo_id=repo_id,
1969 repo_type=repo_type,
1970 operations=[operation],
1971 commit_message=commit_message,
1972 commit_description=commit_description,
1973 token=token,
1974 revision=revision,
1975 create_pr=create_pr,
1976 )
1977 if pr_url is not None:
1978 re_match = re.match(REGEX_DISCUSSION_URL, pr_url)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1844, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads)
1836 commit_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/commit/{revision}"
1838 commit_resp = requests.post(
1839 url=commit_url,
1840 headers={"Authorization": f"Bearer {token}"},
1841 json=commit_payload,
1842 params={"create_pr": 1} if create_pr else None,
1843 )
-> 1844 _raise_for_status(commit_resp)
1845 return commit_resp.json().get("pullRequestUrl", None)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:84, in _raise_for_status(request)
76 if request.status_code == 401:
77 # The repo was not found and the user is not Authenticated
78 raise RepositoryNotFoundError(
79 f"401 Client Error: Repository Not Found for url: {request.url}. If the"
80 " repo is private, make sure you are authenticated. (Request ID:"
81 f" {request_id})"
82 )
---> 84 _raise_with_request_id(request)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:95, in _raise_with_request_id(request)
92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):
93 e.args = (e.args[0] + f" (Request ID: {request_id})",) + e.args[1:]
---> 95 raise e
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:90, in _raise_with_request_id(request)
88 request_id = request.headers.get("X-Request-Id")
89 try:
---> 90 request.raise_for_status()
91 except Exception as e:
92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):
File ~/.local/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1016 http_error_msg = (
1017 f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018 )
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/ORG/DATASET/commit/main (Request ID: a_F0IQAHJdxGKVRYyu1cF)
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.31
- Python version: 3.9.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4677/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4677/timeline | null | null | false |
285 | https://api.github.com/repos/huggingface/datasets/issues/4676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4676/comments | https://api.github.com/repos/huggingface/datasets/issues/4676/events | https://github.com/huggingface/datasets/issues/4676 | 1,302,202,028 | I_kwDODunzps5Nngas | 4,676 | Dataset.map gets stuck on _cast_to_python_objects | {'login': 'srobertjames', 'id': 662612, 'node_id': 'MDQ6VXNlcjY2MjYxMg==', 'avatar_url': 'https://avatars.githubusercontent.com/u/662612?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/srobertjames', 'html_url': 'https://github.com/srobertjames', 'followers_url': 'https://api.github.com/users/srobertjames/followers', 'following_url': 'https://api.github.com/users/srobertjames/following{/other_user}', 'gists_url': 'https://api.github.com/users/srobertjames/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/srobertjames/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/srobertjames/subscriptions', 'organizations_url': 'https://api.github.com/users/srobertjames/orgs', 'repos_url': 'https://api.github.com/users/srobertjames/repos', 'events_url': 'https://api.github.com/users/srobertjames/events{/privacy}', 'received_events_url': 'https://api.github.com/users/srobertjames/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}
{'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue', 'name': 'good first issue', 'color': '7057ff', 'default': True, 'description': 'Good for newcomers'}] | open | false | null | [] | null | ['Are you able to reproduce this? My example is small enough that it should be easy to try.'
'Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the underlying storage format used by `datasets`). This cast is not needed on NumPy arrays as PyArrow supports them natively, so one way to make this transform faster is to add `return_tensors="np"` to the tokenizer call. \r\n\r\nI think we should mention this in the docs (cc @stevhliu)'
"I tested this tokenize function and indeed noticed a casting. However it seems to only concerns the `offset_mapping` field, which contains a list of tuples, that is converted to a list of lists. Since `pyarrow` also supports tuples, we actually don't need to convert the tuples to lists. \r\n\r\nI think this can be changed here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L382-L383\r\n\r\n```diff\r\n- if isinstance(obj, list): \r\n+ if isinstance(obj, (list, tuple)): \r\n```\r\n\r\nand here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L386-L387\r\n\r\n```diff\r\n- return obj if isinstance(obj, list) else [], isinstance(obj, tuple)\r\n+ return obj, False\r\n```\r\n\r\n@srobertjames can you try applying these changes and let us know if it helps ? If so, feel free to open a Pull Request to contribute this improvement if you want :)"
'Wow, adding `return_tensors="np"` sped up my example by a **factor 17x** of and completely eliminated the casting! I\'d recommend not only to document it, but to make that the default.\r\n\r\nThe code at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb does not specify `return_tensors="np"` but yet avoids the casting penalty. How does it do that? (The ntbk seems to do `return_overflowing_tokens=True, return_offsets_mapping=True,`).\r\n\r\nAlso, surprisingly enough, using `return_tensors="pt"` (which is my eventual application) yields this error:\r\n```\r\nTypeError: Provided `function` which is applied to all elements of table returns a `dict` of types \r\n[<class \'torch.Tensor\'>, <class \'torch.Tensor\'>, <class \'torch.Tensor\'>, <class \'torch.Tensor\'>]. \r\nWhen using `batched=True`, make sure provided `function` returns a `dict` of types like \r\n`(<class \'list\'>, <class \'numpy.ndarray\'>)`.\r\n```'
'Setting the output to `"np"` makes the whole pipeline fast because it moves the data buffers from rust to python to arrow using zero-copy, and also because it does eliminate the casting completely ;)\r\n\r\nHave you had a chance to try eliminating the tuple casting using the trick above ?'
'@lhoestq I just benchmarked the two edits to `features.py` above, and they appear to solve the problem, bringing my original example to within 20% the speed of the output `"np"` example. Nice!\r\n\r\nFor a pull request, do you suggest simply following https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md ?'
'Cool ! Sure feel free to follow these instructions to open a PR :) thanks !'] | 2022-07-12 15:09:58 | 2022-08-11 14:27:00 | null | NONE | null | null | null | ## Describe the bug
`Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows.
Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb , and it did _not_ have this problem. However, I'm at a loss to figure out how it avoids it, as the example below is simple and minimal and still has this problem.
Where it occurs, this casting causes `Dataset.map` to run approximately 7x slower than it does for code which does not trigger it.
This may be related to https://github.com/huggingface/datasets/issues/1046 . However, the tokenizer is _not_ set to return Tensors.
## Steps to reproduce the bug
A minimal, self-contained example to reproduce is below:
```python
import transformers
from transformers import AutoTokenizer
from datasets import load_dataset
import torch
import cProfile
pretrained = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
squad = load_dataset('squad')
squad_train = squad['train']
squad_tiny = squad_train.select(range(5000))
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
def tokenize(ds):
tokens = tokenizer(text=ds['question'],
text_pair=ds['context'],
add_special_tokens=True,
padding='max_length',
truncation='only_second',
max_length=160,
stride=32,
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
return tokens
cmd = 'squad_tiny.map(tokenize, batched=True, remove_columns=squad_tiny.column_names)'
cProfile.run(cmd, sort='tottime')
```
## Actual results
The code works, but takes 10-25 sec per batch (about 7x slower than non-casting code), with the following profile. Note that `_cast_to_python_objects` is the culprit.
```
63524075 function calls (58206482 primitive calls) in 121.836 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
5274034/40 68.751 0.000 111.060 2.776 features.py:262(_cast_to_python_objects)
42223832 24.077 0.000 33.310 0.000 {built-in method builtins.isinstance}
16338/20 5.121 0.000 111.053 5.553 features.py:361(<listcomp>)
5274135 4.747 0.000 4.749 0.000 {built-in method _abc._abc_instancecheck}
80/40 4.731 0.059 116.292 2.907 {pyarrow.lib.array}
5274135 4.485 0.000 9.234 0.000 abc.py:96(__instancecheck__)
2661564/2645196 2.959 0.000 4.298 0.000 features.py:1081(_check_non_null_non_empty_recursive)
5 2.786 0.557 2.786 0.557 {method 'encode_batch' of 'tokenizers.Tokenizer' objects}
2668052 0.930 0.000 0.930 0.000 {built-in method builtins.len}
5000 0.930 0.000 0.938 0.000 tokenization_utils_fast.py:187(_convert_encoding)
5 0.750 0.150 0.808 0.162 {method 'to_pydict' of 'pyarrow.lib.Table' objects}
1 0.444 0.444 121.749 121.749 arrow_dataset.py:2501(_map_single)
40 0.375 0.009 116.291 2.907 arrow_writer.py:151(__arrow_array__)
10 0.066 0.007 0.066 0.007 {method 'write_batch' of 'pyarrow.lib._CRecordBatchWriter' objects}
1 0.060 0.060 121.835 121.835 fingerprint.py:409(wrapper)
11387/5715 0.049 0.000 0.175 0.000 {built-in method builtins.getattr}
36 0.049 0.001 0.049 0.001 {pyarrow._compute.call_function}
15000 0.040 0.000 0.040 0.000 _collections_abc.py:719(__iter__)
3 0.023 0.008 0.023 0.008 {built-in method _imp.create_dynamic}
77 0.020 0.000 0.020 0.000 {built-in method builtins.dir}
37 0.019 0.001 0.019 0.001 socket.py:543(send)
15 0.017 0.001 0.017 0.001 tokenization_utils_fast.py:460(<listcomp>)
432/421 0.015 0.000 0.024 0.000 traitlets.py:1388(_notify_observers)
5000 0.015 0.000 0.018 0.000 _collections_abc.py:672(keys)
51 0.014 0.000 0.042 0.001 traitlets.py:276(getmembers)
5 0.014 0.003 3.775 0.755 tokenization_utils_fast.py:392(_batch_encode_plus)
3/1 0.014 0.005 0.035 0.035 {built-in method _imp.exec_dynamic}
5 0.012 0.002 0.950 0.190 tokenization_utils_fast.py:438(<listcomp>)
31626 0.012 0.000 0.012 0.000 {method 'append' of 'list' objects}
1532/1001 0.011 0.000 0.189 0.000 traitlets.py:643(get)
5 0.009 0.002 3.796 0.759 arrow_dataset.py:2631(apply_function_on_filtered_inputs)
51 0.009 0.000 0.062 0.001 traitlets.py:1766(traits)
5 0.008 0.002 3.784 0.757 tokenization_utils_base.py:2632(batch_encode_plus)
368 0.007 0.000 0.044 0.000 traitlets.py:1715(_get_trait_default_generator)
26 0.007 0.000 0.022 0.001 traitlets.py:1186(setup_instance)
51 0.006 0.000 0.010 0.000 traitlets.py:1781(<listcomp>)
80/32 0.006 0.000 0.052 0.002 table.py:1758(cast_array_to_feature)
684 0.006 0.000 0.007 0.000 {method 'items' of 'dict' objects}
4344/1794 0.006 0.000 0.192 0.000 traitlets.py:675(__get__)
...
```
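Based on the workaround discussed in the comments, having the tokenizer return NumPy arrays lets PyArrow ingest the batch directly and skips the per-value cast. A hedged sketch of that variant, reusing the `tokenizer` and `squad_tiny` objects defined above (the function name is just for illustration):
```python
# Sketch of the workaround from the discussion: same tokenizer call as above,
# but returning NumPy arrays so that _cast_to_python_objects is not needed.
def tokenize_np(ds):
    return tokenizer(
        text=ds["question"],
        text_pair=ds["context"],
        add_special_tokens=True,
        padding="max_length",
        truncation="only_second",
        max_length=160,
        stride=32,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        return_tensors="np",
    )

squad_tiny.map(tokenize_np, batched=True, remove_columns=squad_tiny.column_names)
```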
## Environment info
I observed this on both Google colab and my local workstation:
### Google colab
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
### Local
- `datasets` version: 2.3.2
- Platform: Windows-7-6.1.7601-SP1
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4676/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4676/timeline | null | null | false |
286 | https://api.github.com/repos/huggingface/datasets/issues/4675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4675/comments | https://api.github.com/repos/huggingface/datasets/issues/4675/events | https://github.com/huggingface/datasets/issues/4675 | 1,302,193,649 | I_kwDODunzps5NneXx | 4,675 | Unable to use dataset with PyTorch dataloader | {'login': 'BlueskyFR', 'id': 25421460, 'node_id': 'MDQ6VXNlcjI1NDIxNDYw', 'avatar_url': 'https://avatars.githubusercontent.com/u/25421460?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/BlueskyFR', 'html_url': 'https://github.com/BlueskyFR', 'followers_url': 'https://api.github.com/users/BlueskyFR/followers', 'following_url': 'https://api.github.com/users/BlueskyFR/following{/other_user}', 'gists_url': 'https://api.github.com/users/BlueskyFR/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/BlueskyFR/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/BlueskyFR/subscriptions', 'organizations_url': 'https://api.github.com/users/BlueskyFR/orgs', 'repos_url': 'https://api.github.com/users/BlueskyFR/repos', 'events_url': 'https://api.github.com/users/BlueskyFR/events{/privacy}', 'received_events_url': 'https://api.github.com/users/BlueskyFR/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ['Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumPy arrays \r\n2. convert NumPy arrays to Torch tensors. \r\n\r\nThe 2nd step is problematic for your case as `datasets` attempts to convert the array of dictionaries to a PyTorch tensor. One way to fix this is to use the [preprocessing logic](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/translation/run_translation.py#L440-L458) from the Transformers translation script. And on our side, I think we can replace a NumPy array of dicts with a dict of NumPy array if the feature type is `Translation`/`TranslationVariableLanguages` (one array for each language) to get the official PyTorch error message for strings in such case.'] | 2022-07-12 15:04:04 | 2022-07-14 14:17:46 | null | NONE | null | null | null | ## Describe the bug
When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
"para_crawl",
name="enfr",
cache_dir="/tmp/test/",
split="train",
keep_in_memory=True,
)
dataloader = DataLoader(ds.with_format("torch"), num_workers=32)
print(next(iter(dataloader)))
```
Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/
Thanks in advance for your help!
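For reference, a hedged sketch of the kind of preprocessing the comments point to: the `translation` column holds raw strings, so it needs to be tokenized into numeric columns before `.with_format("torch")` can yield tensors. The checkpoint name, sequence length, and batch size below are placeholders, not part of the original report:
```python
# Illustrative only: tokenize the en/fr pairs into numeric columns first,
# then the torch format and the DataLoader work as usual.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint

def preprocess(batch):
    inputs = [ex["en"] for ex in batch["translation"]]
    targets = [ex["fr"] for ex in batch["translation"]]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True, padding="max_length")
    labels = tokenizer(targets, max_length=128, truncation=True, padding="max_length")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

ds_tok = ds.map(preprocess, batched=True, remove_columns=ds.column_names)
dataloader = DataLoader(ds_tok.with_format("torch"), batch_size=8)
```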
## Expected results
The code should run with no error
## Actual results
```
AttributeError: 'str' object has no attribute 'dtype'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4675/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4675/timeline | null | null | false |
287 | https://api.github.com/repos/huggingface/datasets/issues/4674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4674/comments | https://api.github.com/repos/huggingface/datasets/issues/4674/events | https://github.com/huggingface/datasets/issues/4674 | 1,301,294,844 | I_kwDODunzps5NkC78 | 4,674 | Issue loading datasets -- pyarrow.lib has no attribute | {'login': 'margotwagner', 'id': 39107794, 'node_id': 'MDQ6VXNlcjM5MTA3Nzk0', 'avatar_url': 'https://avatars.githubusercontent.com/u/39107794?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/margotwagner', 'html_url': 'https://github.com/margotwagner', 'followers_url': 'https://api.github.com/users/margotwagner/followers', 'following_url': 'https://api.github.com/users/margotwagner/following{/other_user}', 'gists_url': 'https://api.github.com/users/margotwagner/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/margotwagner/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/margotwagner/subscriptions', 'organizations_url': 'https://api.github.com/users/margotwagner/orgs', 'repos_url': 'https://api.github.com/users/margotwagner/repos', 'events_url': 'https://api.github.com/users/margotwagner/events{/privacy}', 'received_events_url': 'https://api.github.com/users/margotwagner/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | open | false | null | [] | null | ['Hi @margotwagner, thanks for reporting.\r\n\r\nUnfortunately, I\'m not able to reproduce your bug: in an environment with datasets-2.3.2 and pyarrow-8.0.0, I can load the datasets without any problem:\r\n```python\r\n>>> ds = load_dataset("glue", "cola")\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: [\'sentence\', \'label\', \'idx\'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: [\'sentence\', \'label\', \'idx\'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: [\'sentence\', \'label\', \'idx\'],\r\n num_rows: 1063\r\n })\r\n})\r\n\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n8.0.0\r\n>>> from pyarrow.lib import IpcReadOptions\r\n>>> IpcReadOptions\r\npyarrow.lib.IpcReadOptions\r\n```\r\n\r\nI think you may have a problem in your Python environment: maybe you have also an old version of pyarrow that has precedence when importing it.\r\n\r\nCould you please check this (just after you tried to load the dataset and got the error)?\r\n```python\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n``` '] | 2022-07-11 22:10:44 | 2022-07-12 04:54:31 | null | NONE | null | null | null | ## Describe the bug
I am trying to load sentiment analysis datasets from huggingface, but any dataset I try to use via load_dataset, I get the same error:
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
## Steps to reproduce the bug
```python
dataset = load_dataset("glue", "cola")
```
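As suggested in the comments, an illustrative sanity check is to confirm which PyArrow build actually gets imported right after the failing call, since an older copy shadowing the 8.0.0 install would explain the missing attribute:
```python
# Illustrative check from the discussion: print the imported pyarrow version and path.
import pyarrow

print(pyarrow.__version__)
print(pyarrow.__file__)  # assumption: the install path helps spot a stale, shadowing copy
```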
## Expected results
Download datasets without issue.
## Actual results
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: 8.0.0
- Pandas version: 1.1.0
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4674/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4674/timeline | null | null | false |
288 | https://api.github.com/repos/huggingface/datasets/issues/4673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4673/comments | https://api.github.com/repos/huggingface/datasets/issues/4673/events | https://github.com/huggingface/datasets/issues/4673 | 1,301,010,331 | I_kwDODunzps5Ni9eb | 4,673 | load_datasets on csv returns everything as a string | {'login': 'courtneysprouse', 'id': 25102613, 'node_id': 'MDQ6VXNlcjI1MTAyNjEz', 'avatar_url': 'https://avatars.githubusercontent.com/u/25102613?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/courtneysprouse', 'html_url': 'https://github.com/courtneysprouse', 'followers_url': 'https://api.github.com/users/courtneysprouse/followers', 'following_url': 'https://api.github.com/users/courtneysprouse/following{/other_user}', 'gists_url': 'https://api.github.com/users/courtneysprouse/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/courtneysprouse/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/courtneysprouse/subscriptions', 'organizations_url': 'https://api.github.com/users/courtneysprouse/orgs', 'repos_url': 'https://api.github.com/users/courtneysprouse/repos', 'events_url': 'https://api.github.com/users/courtneysprouse/events{/privacy}', 'received_events_url': 'https://api.github.com/users/courtneysprouse/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the "csv" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to lacking of implementation in PyArrow. For example:\r\n```python\r\nimport datasets\r\n\r\nfeatures = datasets.Features(\r\n {\r\n "tokens": datasets.Sequence(datasets.Value("string")),\r\n "ner_tags": datasets.Sequence(datasets.Value("int32")),\r\n }\r\n)\r\n\r\nnew_conll = datasets.load_dataset("csv", data_files="ner_conll.csv", features=features)\r\n```\r\ngives `ArrowNotImplementedError` error:\r\n```\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n```\r\n\r\nOn the other hand, if you just would like to save and afterwards load your dataset, you could use `save_to_disk` and `load_from_disk` instead. 
These functions preserve all data types.\r\n```python\r\n>>> orig_conll.save_to_disk("ner_conll")\r\n\r\n>>> from datasets import load_from_disk\r\n\r\n>>> new_conll = load_from_disk("ner_conll")\r\n>>> new_conll\r\nDatasetDict({\r\n train: Dataset({\r\n features: [\'id\', \'tokens\', \'pos_tags\', \'chunk_tags\', \'ner_tags\'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: [\'id\', \'tokens\', \'pos_tags\', \'chunk_tags\', \'ner_tags\'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: [\'id\', \'tokens\', \'pos_tags\', \'chunk_tags\', \'ner_tags\'],\r\n num_rows: 3454\r\n })\r\n})\r\n>>> new_conll["train"][0]\r\n{\'chunk_tags\': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n \'id\': \'0\',\r\n \'ner_tags\': [3, 0, 7, 0, 0, 0, 7, 0, 0],\r\n \'pos_tags\': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n \'tokens\': [\'EU\',\r\n \'rejects\',\r\n \'German\',\r\n \'call\',\r\n \'to\',\r\n \'boycott\',\r\n \'British\',\r\n \'lamb\',\r\n \'.\']}\r\n>>> new_conll["train"].features\r\n{\'chunk_tags\': Sequence(feature=ClassLabel(num_classes=23, names=[\'O\', \'B-ADJP\', \'I-ADJP\', \'B-ADVP\', \'I-ADVP\', \'B-CONJP\', \'I-CONJP\', \'B-INTJ\', \'I-INTJ\', \'B-LST\', \'I-LST\', \'B-NP\', \'I-NP\', \'B-PP\', \'I-PP\', \'B-PRT\', \'I-PRT\', \'B-SBAR\', \'I-SBAR\', \'B-UCP\', \'I-UCP\', \'B-VP\', \'I-VP\'], id=None), length=-1, id=None),\r\n \'id\': Value(dtype=\'string\', id=None),\r\n \'ner_tags\': Sequence(feature=ClassLabel(num_classes=9, names=[\'O\', \'B-PER\', \'I-PER\', \'B-ORG\', \'I-ORG\', \'B-LOC\', \'I-LOC\', \'B-MISC\', \'I-MISC\'], id=None), length=-1, id=None),\r\n \'pos_tags\': Sequence(feature=ClassLabel(num_classes=47, names=[\'"\', "\'\'", \'#\', \'$\', \'(\', \')\', \',\', \'.\', \':\', \'``\', \'CC\', \'CD\', \'DT\', \'EX\', \'FW\', \'IN\', \'JJ\', \'JJR\', \'JJS\', \'LS\', \'MD\', \'NN\', \'NNP\', \'NNPS\', \'NNS\', \'NN|SYM\', \'PDT\', \'POS\', \'PRP\', \'PRP$\', \'RB\', \'RBR\', \'RBS\', \'RP\', \'SYM\', \'TO\', \'UH\', \'VB\', \'VBD\', \'VBG\', \'VBN\', \'VBP\', \'VBZ\', \'WDT\', \'WP\', \'WP$\', \'WRB\'], id=None), length=-1, id=None),\r\n \'tokens\': Sequence(feature=Value(dtype=\'string\', id=None), length=-1, id=None)}\r\n```'
'Hi @albertvillanova!\r\n\r\nThanks so much for your suggestions! That worked! '] | 2022-07-11 17:30:24 | 2022-07-12 13:33:09 | 2022-07-12 13:33:08 | NONE | null | null | null | ## Describe the bug
If you use:
`conll_dataset.to_csv("ner_conll.csv")`
it will create a CSV file with all of your data as expected; however, when you load it with:
`conll_dataset = load_dataset("csv", data_files="ner_conll.csv")`
everything is read in as a string. For example if I look at everything in 'ner_tags' I get back `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` instead of what I originally saved which was `[[3, 0, 7, 0, 0, 0, 7, 0, 0], [1, 2], [5, 0]]`
I think maybe there is something funky going on with the CSV delimiter.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
#load original conll dataset
orig_conll = load_dataset("conll2003")
#save original conll as a csv
orig_conll.to_csv("ner_conll.csv")
#reload conll data as a csv
new_conll = load_dataset("csv", data_files="ner_conll.csv")
```
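As noted in the comments, a type-preserving alternative to the CSV round trip is `save_to_disk`/`load_from_disk`; a short sketch reusing `orig_conll` from above (the directory name is arbitrary):
```python
# Sketch of the save_to_disk / load_from_disk round trip from the discussion,
# which keeps the original feature types (Sequence of ClassLabel) instead of strings.
from datasets import load_from_disk

orig_conll.save_to_disk("ner_conll")
new_conll = load_from_disk("ner_conll")
print(new_conll["train"][0]["ner_tags"])  # [3, 0, 7, 0, 0, 0, 7, 0, 0] (ints, not a string)
```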
## Expected results
I would expect the data to be returned as the data type I saved it as, i.e. if I save a list of ints
[[3, 0, 7, 0, 0, 0, 7, 0, 0]], I shouldn't get back a string ['[3 0 7 0 0 0 7 0 0]']
I also get back a string when I pass a list of strings ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']
## Actual results
A list of strings `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']`
A string "['EU' 'rejects' 'German' 'call' 'to' 'boycott' 'British' 'lamb' '.']"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4673/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4673/timeline | null | completed | false |
289 | https://api.github.com/repos/huggingface/datasets/issues/4672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4672/comments | https://api.github.com/repos/huggingface/datasets/issues/4672/events | https://github.com/huggingface/datasets/pull/4672 | 1,300,911,467 | PR_kwDODunzps47NEfV | 4,672 | Support extract 7-zip compressed data files | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
"Cool! Can you please remove `Fix #3541` from the description as this PR doesn't add support for streaming/`iter_archive`, so it only partially addresses the issue?\r\n\r\nSide note:\r\nI think we can use `libarchive` (`libarchive-c` is a Python package with the bindings) for streaming 7z archives. The only issue with this lib is that it's tricky to install on Windows/Mac."] | 2022-07-11 15:56:51 | 2022-07-15 13:14:27 | 2022-07-15 13:02:07 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4672', 'html_url': 'https://github.com/huggingface/datasets/pull/4672', 'diff_url': 'https://github.com/huggingface/datasets/pull/4672.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4672.patch', 'merged_at': datetime.datetime(2022, 7, 15, 13, 2, 7)} | Fix partially #3541, fix #4670. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4672/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4672/timeline | null | null | true |
290 | https://api.github.com/repos/huggingface/datasets/issues/4671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4671/comments | https://api.github.com/repos/huggingface/datasets/issues/4671/events | https://github.com/huggingface/datasets/issues/4671 | 1,300,385,909 | I_kwDODunzps5NglB1 | 4,671 | Dataset Viewer issue for wmt16 | {'login': 'lewtun', 'id': 26859204, 'node_id': 'MDQ6VXNlcjI2ODU5MjA0', 'avatar_url': 'https://avatars.githubusercontent.com/u/26859204?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lewtun', 'html_url': 'https://github.com/lewtun', 'followers_url': 'https://api.github.com/users/lewtun/followers', 'following_url': 'https://api.github.com/users/lewtun/following{/other_user}', 'gists_url': 'https://api.github.com/users/lewtun/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lewtun/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lewtun/subscriptions', 'organizations_url': 'https://api.github.com/users/lewtun/orgs', 'repos_url': 'https://api.github.com/users/lewtun/repos', 'events_url': 'https://api.github.com/users/lewtun/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lewtun/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | open | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 
'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ["Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating..."
'Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...'
"Indeed, the above mentioned PR fixed the loading script (it was not working before).\r\n\r\nI'm forcing the refresh of the Viewer."
'Please note that the above mentioned PR also made an enhancement in the `datasets` library, required by this loading script. This enhancement will only be available to the Viewer once we make our next release.'] | 2022-07-11 08:34:11 | 2022-07-11 13:09:04 | null | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status code: 400
Exception: NotImplementedError
Message: This is a abstract method
```
Thanks!
### Owner
No | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4671/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 1} | https://api.github.com/repos/huggingface/datasets/issues/4671/timeline | null | null | false |
291 | https://api.github.com/repos/huggingface/datasets/issues/4670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4670/comments | https://api.github.com/repos/huggingface/datasets/issues/4670/events | https://github.com/huggingface/datasets/issues/4670 | 1,299,984,246 | I_kwDODunzps5NfC92 | 4,670 | Can't extract files from `.7z` zipfile using `download_and_extract` | {'login': 'bhavitvyamalik', 'id': 19718818, 'node_id': 'MDQ6VXNlcjE5NzE4ODE4', 'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bhavitvyamalik', 'html_url': 'https://github.com/bhavitvyamalik', 'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers', 'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}', 'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions', 'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs', 'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos', 'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n'
'Related to this issue: https://github.com/huggingface/datasets/issues/3541'
'Sure, let me look into and check what can be done. Will keep you guys updated here!'
"Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn’t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?"
"Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "] | 2022-07-10 18:16:49 | 2022-07-15 13:02:07 | 2022-07-15 13:02:07 | CONTRIBUTOR | null | null | null | ## Describe the bug
I'm adding a new dataset which is a `.7z` archive on Google Drive and contains 3 JSON files inside. I'm able to download the data files using `download_and_extract`, but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration default
Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...
Downloading data: 100%|█████████████████████████████████████████████████████████| 77.2M/77.2M [00:23<00:00, 3.28MB/s]
/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset
use_auth_token=use_auth_token,
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json'
```
just before generating the splits. I checked the `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's a `7z` archive (the same as the downloaded Google Drive file), which means it didn't get extracted. Do I need to extract it separately and then pass the paths for the train/dev/test files in `SplitGenerator`?
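Until 7z support is added to `datasets` itself, the workaround referenced in the comments (as done in the `samsum` loading script) is to download the archive and extract it explicitly. A rough sketch only, with `_URL` standing in for the Google Drive link and `py7zr` as an extra dependency:
```python
# Rough sketch, not the actual loading script: download the 7z archive, extract it
# with py7zr, then point the split generators at the extracted json files.
import os

import datasets
import py7zr

def _split_generators(self, dl_manager):
    archive_path = dl_manager.download(_URL)  # _URL: placeholder for the Google Drive link
    extract_dir = os.path.join(os.path.dirname(archive_path), "extracted")
    os.makedirs(extract_dir, exist_ok=True)
    with py7zr.SevenZipFile(archive_path, mode="r") as archive:
        archive.extractall(path=extract_dir)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"filepath": os.path.join(extract_dir, "merged_train.json")},
        ),
    ]
```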
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.8
- PyArrow version: 5.0.0 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4670/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4670/timeline | null | completed | false |
292 | https://api.github.com/repos/huggingface/datasets/issues/4669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4669/comments | https://api.github.com/repos/huggingface/datasets/issues/4669/events | https://github.com/huggingface/datasets/issues/4669 | 1,299,848,003 | I_kwDODunzps5NehtD | 4,669 | loading oscar-corpus/OSCAR-2201 raises an error | {'login': 'vitalyshalumov', 'id': 33824221, 'node_id': 'MDQ6VXNlcjMzODI0MjIx', 'avatar_url': 'https://avatars.githubusercontent.com/u/33824221?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/vitalyshalumov', 'html_url': 'https://github.com/vitalyshalumov', 'followers_url': 'https://api.github.com/users/vitalyshalumov/followers', 'following_url': 'https://api.github.com/users/vitalyshalumov/following{/other_user}', 'gists_url': 'https://api.github.com/users/vitalyshalumov/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/vitalyshalumov/subscriptions', 'organizations_url': 'https://api.github.com/users/vitalyshalumov/orgs', 'repos_url': 'https://api.github.com/users/vitalyshalumov/repos', 'events_url': 'https://api.github.com/users/vitalyshalumov/events{/privacy}', 'received_events_url': 'https://api.github.com/users/vitalyshalumov/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['I had to use the appropriate token for use_auth_token. Thank you.'] | 2022-07-10 07:09:30 | 2022-07-11 09:27:49 | 2022-07-11 09:27:49 | NONE | null | null | null | ## Describe the bug
load_dataset('oscar-2201', 'af')
raises an error:
```
Traceback (most recent call last):
  File "/usr/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
    builder_instance = load_dataset_builder(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory
    raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py
```
I've tried other permutations such as:
```python
oscar_22 = load_dataset('oscar-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-2201', 'af')
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201')
```
with the same unfortunate result.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
oscar_22 = load_dataset('oscar-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-2201', 'af')
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201')
```
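The author's eventual resolution (see the comment above) was to authenticate properly, since OSCAR-2201 is a gated dataset; a hedged sketch:
```python
# Sketch based on the author's resolution: log in first (`huggingface-cli login`)
# or pass a valid access token, then load the gated dataset by its full repo id.
from datasets import load_dataset

oscar_af = load_dataset("oscar-corpus/OSCAR-2201", "af", use_auth_token=True)
```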
## Expected results
loaded data
## Actual results
```
Traceback (most recent call last):
  File "/usr/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
    builder_instance = load_dataset_builder(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory
    raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4669/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4669/timeline | null | completed | false |
293 | https://api.github.com/repos/huggingface/datasets/issues/4668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4668/comments | https://api.github.com/repos/huggingface/datasets/issues/4668/events | https://github.com/huggingface/datasets/issues/4668 | 1,299,735,893 | I_kwDODunzps5NeGVV | 4,668 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | {'login': 'hungnmai', 'id': 21364546, 'node_id': 'MDQ6VXNlcjIxMzY0NTQ2', 'avatar_url': 'https://avatars.githubusercontent.com/u/21364546?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/hungnmai', 'html_url': 'https://github.com/hungnmai', 'followers_url': 'https://api.github.com/users/hungnmai/followers', 'following_url': 'https://api.github.com/users/hungnmai/following{/other_user}', 'gists_url': 'https://api.github.com/users/hungnmai/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/hungnmai/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/hungnmai/subscriptions', 'organizations_url': 'https://api.github.com/users/hungnmai/orgs', 'repos_url': 'https://api.github.com/users/hungnmai/repos', 'events_url': 'https://api.github.com/users/hungnmai/events{/privacy}', 'received_events_url': 'https://api.github.com/users/hungnmai/received_events', 'type': 'User', 'site_admin': False} | [{'id': 3470211881, 'node_id': 'LA_kwDODunzps7O1zsp', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer', 'name': 'dataset-viewer', 'color': 'E5583E', 'default': False, 'description': 'Related to the dataset viewer on huggingface.co'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 
'type': 'User', 'site_admin': False}] | null | ['It seems like a private dataset. The viewer is currently not supported on the private datasets.'] | 2022-07-09 18:04:13 | 2022-07-11 07:47:47 | 2022-07-11 07:47:47 | NONE | null | null | null | ### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4668/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4668/timeline | null | completed | false |
294 | https://api.github.com/repos/huggingface/datasets/issues/4667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4667/comments | https://api.github.com/repos/huggingface/datasets/issues/4667/events | https://github.com/huggingface/datasets/issues/4667 | 1,299,735,703 | I_kwDODunzps5NeGSX | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | {'login': 'hungnmai', 'id': 21364546, 'node_id': 'MDQ6VXNlcjIxMzY0NTQ2', 'avatar_url': 'https://avatars.githubusercontent.com/u/21364546?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/hungnmai', 'html_url': 'https://github.com/hungnmai', 'followers_url': 'https://api.github.com/users/hungnmai/followers', 'following_url': 'https://api.github.com/users/hungnmai/following{/other_user}', 'gists_url': 'https://api.github.com/users/hungnmai/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/hungnmai/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/hungnmai/subscriptions', 'organizations_url': 'https://api.github.com/users/hungnmai/orgs', 'repos_url': 'https://api.github.com/users/hungnmai/repos', 'events_url': 'https://api.github.com/users/hungnmai/events{/privacy}', 'received_events_url': 'https://api.github.com/users/hungnmai/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892865, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODY1', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/duplicate', 'name': 'duplicate', 'color': 'cfd3d7', 'default': True, 'description': 'This issue or pull request already exists'}] | closed | false | {'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'severo', 'id': 1676121, 'node_id': 'MDQ6VXNlcjE2NzYxMjE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1676121?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/severo', 'html_url': 'https://github.com/severo', 'followers_url': 'https://api.github.com/users/severo/followers', 'following_url': 'https://api.github.com/users/severo/following{/other_user}', 'gists_url': 'https://api.github.com/users/severo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/severo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/severo/subscriptions', 'organizations_url': 'https://api.github.com/users/severo/orgs', 'repos_url': 'https://api.github.com/users/severo/repos', 'events_url': 'https://api.github.com/users/severo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/severo/received_events', 'type': 
'User', 'site_admin': False}] | null | [] | 2022-07-09 18:03:15 | 2022-07-11 07:47:15 | 2022-07-11 07:47:15 | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4667/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4667/timeline | null | completed | false |
295 | https://api.github.com/repos/huggingface/datasets/issues/4666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4666/comments | https://api.github.com/repos/huggingface/datasets/issues/4666/events | https://github.com/huggingface/datasets/issues/4666 | 1,299,732,238 | I_kwDODunzps5NeFcO | 4,666 | Issues with concatenating datasets | {'login': 'ChenghaoMou', 'id': 32014649, 'node_id': 'MDQ6VXNlcjMyMDE0NjQ5', 'avatar_url': 'https://avatars.githubusercontent.com/u/32014649?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ChenghaoMou', 'html_url': 'https://github.com/ChenghaoMou', 'followers_url': 'https://api.github.com/users/ChenghaoMou/followers', 'following_url': 'https://api.github.com/users/ChenghaoMou/following{/other_user}', 'gists_url': 'https://api.github.com/users/ChenghaoMou/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ChenghaoMou/subscriptions', 'organizations_url': 'https://api.github.com/users/ChenghaoMou/orgs', 'repos_url': 'https://api.github.com/users/ChenghaoMou/repos', 'events_url': 'https://api.github.com/users/ChenghaoMou/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ChenghaoMou/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | null | [] | null | ['Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it\'s not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset("json", data_files={"train": "output.jsonl"}, features=squad["train"].features)\r\n``` '
'That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now.'] | 2022-07-09 17:45:14 | 2022-07-12 17:16:15 | 2022-07-12 17:16:14 | NONE | null | null | null | ## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be automatically converted.
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with a internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatilbity layer with the TensorFlow Datasets library but may be un-wanted in some cases. If you don’t want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence).
## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_dataset
squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)
temp = load_dataset("json", data_files={"train": "output.jsonl"})
concatenate_datasets([temp["train"], squad["train"]])
```
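A hedged sketch of the fix proposed in the comments: the JSON round trip turns `answer_start` into `int64`, so passing the original features when reloading keeps the schemas aligned and the concatenation goes through:
```python
# Per the discussion: reload the JSON file with the original features so that
# answer_start stays int32 and the two datasets can be concatenated.
temp = load_dataset("json", data_files={"train": "output.jsonl"}, features=squad["train"].features)
combined = concatenate_datasets([temp["train"], squad["train"]])
```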
## Expected results
No error executing that code
## Actual results
```
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null").
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.8.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4666/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4666/timeline | null | completed | false |
296 | https://api.github.com/repos/huggingface/datasets/issues/4665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4665/comments | https://api.github.com/repos/huggingface/datasets/issues/4665/events | https://github.com/huggingface/datasets/issues/4665 | 1,299,652,638 | I_kwDODunzps5NdyAe | 4,665 | Unable to create dataset having Python dataset script only | {'login': 'aleSuglia', 'id': 1479733, 'node_id': 'MDQ6VXNlcjE0Nzk3MzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1479733?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/aleSuglia', 'html_url': 'https://github.com/aleSuglia', 'followers_url': 'https://api.github.com/users/aleSuglia/followers', 'following_url': 'https://api.github.com/users/aleSuglia/following{/other_user}', 'gists_url': 'https://api.github.com/users/aleSuglia/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/aleSuglia/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/aleSuglia/subscriptions', 'organizations_url': 'https://api.github.com/users/aleSuglia/orgs', 'repos_url': 'https://api.github.com/users/aleSuglia/repos', 'events_url': 'https://api.github.com/users/aleSuglia/events{/privacy}', 'received_events_url': 'https://api.github.com/users/aleSuglia/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | false | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 
'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | null | ['Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions'] | 2022-07-09 11:45:46 | 2022-07-11 07:10:09 | 2022-07-11 07:10:01 | CONTRIBUTOR | null | null | null | ## Describe the bug
Hi there,
I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands but seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already):
```
datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs
```
while it errors when I remove the python script:
```
datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs
```
The error message is the following:
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4665/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4665/timeline | null | completed | false |
297 | https://api.github.com/repos/huggingface/datasets/issues/4664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4664/comments | https://api.github.com/repos/huggingface/datasets/issues/4664/events | https://github.com/huggingface/datasets/pull/4664 | 1,299,571,212 | PR_kwDODunzps47IvfG | 4,664 | Add stanford dog dataset | {'login': 'khushmeeet', 'id': 8711912, 'node_id': 'MDQ6VXNlcjg3MTE5MTI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8711912?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/khushmeeet', 'html_url': 'https://github.com/khushmeeet', 'followers_url': 'https://api.github.com/users/khushmeeet/followers', 'following_url': 'https://api.github.com/users/khushmeeet/following{/other_user}', 'gists_url': 'https://api.github.com/users/khushmeeet/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/khushmeeet/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/khushmeeet/subscriptions', 'organizations_url': 'https://api.github.com/users/khushmeeet/orgs', 'repos_url': 'https://api.github.com/users/khushmeeet/repos', 'events_url': 'https://api.github.com/users/khushmeeet/events{/privacy}', 'received_events_url': 'https://api.github.com/users/khushmeeet/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'
"Hi @khushmeeet, thanks for your contribution.\r\n\r\nBut wouldn't it be better to add this dataset to the Hub? \r\n- https://huggingface.co/docs/datasets/share\r\n- https://huggingface.co/docs/datasets/dataset_script"
'Hi @albertvillanova \r\n\r\nDataset is added to Hub - https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset'
'Great, so I guess we can close this issue, as the dataset is already available on the Hub.'
'OK I read the discussion on:\r\n- #4504\r\n\r\nCurrently, priority is adding datasets to the Hub, not here on GitHub.\r\n\r\nIf you would like to contribute the loading script and all the metadata you generated (README + JSON files), you could:\r\n- Either make a PR to the existing dataset on the Hub\r\n- Create a new dataset on the Hub:\r\n - Either under your personal namespace\r\n - or even more professionally, under the namespace `stanfordSVL` (Stanford Vision and Learning Lab: https://svl.stanford.edu/)\r\n\r\nYou can use the Community tab to ping us if you need help or have any questions.'] | 2022-07-09 04:46:07 | 2022-07-15 13:30:32 | 2022-07-15 13:15:42 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4664', 'html_url': 'https://github.com/huggingface/datasets/pull/4664', 'diff_url': 'https://github.com/huggingface/datasets/pull/4664.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4664.patch', 'merged_at': None} | This PR is for adding dataset, related to issue #4504.
We are adding Stanford dog breed dataset. It is a multi class image classification dataset.
Details can be found here - http://vision.stanford.edu/aditya86/ImageNetDogs/
Tests on dummy data is failing currently, which I am looking into. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4664/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4664/timeline | null | null | true |
298 | https://api.github.com/repos/huggingface/datasets/issues/4663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4663/comments | https://api.github.com/repos/huggingface/datasets/issues/4663/events | https://github.com/huggingface/datasets/pull/4663 | 1,299,298,693 | PR_kwDODunzps47H19n | 4,663 | Add text decorators | {'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-08 17:51:48 | 2022-07-18 18:33:14 | 2022-07-18 18:20:49 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4663', 'html_url': 'https://github.com/huggingface/datasets/pull/4663', 'diff_url': 'https://github.com/huggingface/datasets/pull/4663.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4663.patch', 'merged_at': datetime.datetime(2022, 7, 18, 18, 20, 49)} | This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides!
![underline](https://user-images.githubusercontent.com/59462357/178044392-9596693e-9a4a-479a-a282-f1edbd90be1a.png)
TODO:
- [x] Open PR to support new Tailwind classes | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4663/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4663/timeline | null | null | true |
299 | https://api.github.com/repos/huggingface/datasets/issues/4662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4662/comments | https://api.github.com/repos/huggingface/datasets/issues/4662/events | https://github.com/huggingface/datasets/pull/4662 | 1,298,845,369 | PR_kwDODunzps47GTEc | 4,662 | Fix: conll2003 - fix empty example | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | false | null | [] | null | ['_The documentation is not available anymore as the PR was closed or merged._'] | 2022-07-08 10:49:13 | 2022-07-08 14:14:53 | 2022-07-08 14:02:42 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/4662', 'html_url': 'https://github.com/huggingface/datasets/pull/4662', 'diff_url': 'https://github.com/huggingface/datasets/pull/4662.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/4662.patch', 'merged_at': datetime.datetime(2022, 7, 8, 14, 2, 42)} | As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/4662/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/4662/timeline | null | null | true |