Dataset schema (one row per GitHub issue or pull request from huggingface/datasets):

| Field | Type | Lengths / classes / range |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.12B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–3.66k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,644B (Unix time, ms) |
| updated_at | int64 | 1,587B–1,644B (Unix time, ms) |
| closed_at | int64 | 1,587B–1,644B (Unix time, ms) |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |

The first rows of the dataset follow, with fields in the schema order above.
**PR #3356: to_tf_dataset() refactor**
url: https://api.github.com/repos/huggingface/datasets/issues/3356
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3356/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3356/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3356/events
html_url: https://github.com/huggingface/datasets/pull/3356
id: 1,068,503,932
node_id: PR_kwDODunzps4vQQLD
number: 3,356
title: to_tf_dataset() refactor
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!", "Hi @lhoestq @mariosasko, the other PRs this was depending on in Transformers and huggingface/notebooks are now merged, so this is ready to go. Do you want to take one more look at it, or are you happy at this point?", "The documentation for the method is fine, it doesn't need to be changed, but the tutorial notebook definitely looks a little out of date. Let me see what I can do!", "@lhoestq I rewrote the last bit of the notebook - let me know what you think!", "Cool thank you ! It's much nicer that what we had :)\r\n\r\nI also spotted other things I'd like to update in the notebook (especially the beginning) but it can be fixed later" ]
created_at: 1,638,370,470,000
updated_at: 1,639,045,613,000
closed_at: 1,639,045,613,000
author_association: CONTRIBUTOR
active_lock_reason: null
This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are: - A collator is always required (there was way too much hackiness making things like labels work without it) - Lots of cleanup and a lot of code moved to `_get_output_signature` - Should now handle it gracefully when the data collator adds unexpected columns
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3356/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3356/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3356", "html_url": "https://github.com/huggingface/datasets/pull/3356", "diff_url": "https://github.com/huggingface/datasets/pull/3356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3356.patch", "merged_at": 1639045613000 }
is_pull_request: true
**PR #3355: Extend support for streaming datasets that use pd.read_excel**
url: https://api.github.com/repos/huggingface/datasets/issues/3355
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3355/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3355/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3355/events
html_url: https://github.com/huggingface/datasets/pull/3355
id: 1,068,468,573
node_id: PR_kwDODunzps4vQIoy
number: 3,355
title: Extend support for streaming datasets that use pd.read_excel
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "TODO in the future: https://github.com/huggingface/datasets/pull/3355#discussion_r761138011\r\n- If we finally find a use case where the `pd.read_excel()` can work in streaming mode (using fsspec), that is, without using the `.read()`, I propose to try this first, catch the ValueError and then try with `.read`, but all implemented in `xpandas_read_excel`. " ]
created_at: 1,638,368,563,000
updated_at: 1,639,725,859,000
closed_at: 1,639,725,858,000
author_association: MEMBER
active_lock_reason: null
This PR fixes error: ``` ValueError: Cannot seek streaming HTTP file ``` CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3355/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3355", "html_url": "https://github.com/huggingface/datasets/pull/3355", "diff_url": "https://github.com/huggingface/datasets/pull/3355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3355.patch", "merged_at": 1639725858000 }
is_pull_request: true
**PR #3354: Remove duplicate name from dataset cards**
url: https://api.github.com/repos/huggingface/datasets/issues/3354
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3354/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3354/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3354/events
html_url: https://github.com/huggingface/datasets/pull/3354
id: 1,068,307,271
node_id: PR_kwDODunzps4vPl9d
number: 3,354
title: Remove duplicate name from dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
comments: []
created_at: 1,638,359,140,000
updated_at: 1,638,364,470,000
closed_at: 1,638,364,469,000
author_association: MEMBER
active_lock_reason: null
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3354/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3354", "html_url": "https://github.com/huggingface/datasets/pull/3354", "diff_url": "https://github.com/huggingface/datasets/pull/3354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3354.patch", "merged_at": 1638364469000 }
is_pull_request: true
**Issue #3353: add one field "example_id", but I can't see it in the "comput_loss" function**
url: https://api.github.com/repos/huggingface/datasets/issues/3353
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3353/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3353/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3353/events
html_url: https://github.com/huggingface/datasets/issues/3353
id: 1,068,173,783
node_id: I_kwDODunzps4_qwnX
number: 3,353
title: add one field "example_id", but I can't see it in the "comput_loss" function
{ "login": "yanllearnn", "id": 38966558, "node_id": "MDQ6VXNlcjM4OTY2NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanllearnn", "html_url": "https://github.com/yanllearnn", "followers_url": "https://api.github.com/users/yanllearnn/followers", "following_url": "https://api.github.com/users/yanllearnn/following{/other_user}", "gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions", "organizations_url": "https://api.github.com/users/yanllearnn/orgs", "repos_url": "https://api.github.com/users/yanllearnn/repos", "events_url": "https://api.github.com/users/yanllearnn/events{/privacy}", "received_events_url": "https://api.github.com/users/yanllearnn/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the model), the data loader doesn't need to return it by default.\r\n\r\nHowever you can disable this behavior by setting `remove_unused_columns` to `False` to your training arguments. In this case in `compute_loss` you will get the full item with all the fields.\r\n\r\nNote that since the model doesn't take `example_id` as input, you will have to remove it from the inputs when `model(**inputs)` is called", "Hi, I have set **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**, but the field doesn't been contained yet.\r\n```\r\ndef main():\r\n argp = HfArgumentParser(TrainingArguments)\r\n # The HfArgumentParser object collects command-line arguments into an object (and provides default values for unspecified arguments).\r\n # In particular, TrainingArguments has several keys that you'll need/want to specify (when you call run.py from the command line):\r\n # --do_train\r\n # When included, this argument tells the script to train a model.\r\n # See docstrings for \"--task\" and \"--dataset\" for how the training dataset is selected.\r\n # --do_eval\r\n # When included, this argument tells the script to evaluate the trained/loaded model on the validation split of the selected dataset.\r\n # --per_device_train_batch_size <int, default=8>\r\n # This is the training batch size.\r\n # If you're running on GPU, you should try to make this as large as you can without getting CUDA out-of-memory errors.\r\n # For reference, with --max_length=128 and the default ELECTRA-small model, a batch size of 32 should fit in 4gb of GPU memory.\r\n # --num_train_epochs <float, default=3.0>\r\n # How many passes to do through the training data.\r\n # --output_dir <path>\r\n # Where to put the trained model checkpoint(s) and any eval predictions.\r\n # *This argument is required*.\r\n\r\n argp.add_argument('--model', type=str,\r\n default='google/electra-small-discriminator',\r\n help=\"\"\"This argument specifies the base model to fine-tune.\r\n This should either be a HuggingFace model ID (see https://huggingface.co/models)\r\n or a path to a saved model checkpoint (a folder containing config.json and pytorch_model.bin).\"\"\")\r\n argp.add_argument('--task', type=str, choices=['nli', 'qa'], required=True,\r\n help=\"\"\"This argument specifies which task to train/evaluate on.\r\n Pass \"nli\" for natural language inference or \"qa\" for question answering.\r\n By default, \"nli\" will use the SNLI dataset, and \"qa\" will use the SQuAD dataset.\"\"\")\r\n argp.add_argument('--dataset', type=str, default=None,\r\n help=\"\"\"This argument overrides the default dataset used for the specified task.\"\"\")\r\n argp.add_argument('--max_length', type=int, default=128,\r\n help=\"\"\"This argument limits the maximum sequence length used during training/evaluation.\r\n Shorter sequence lengths need less memory and computation time, but some examples may end up getting truncated.\"\"\")\r\n argp.add_argument('--max_train_samples', type=int, default=None,\r\n help='Limit the number of examples to train on.')\r\n argp.add_argument('--max_eval_samples', type=int, default=None,\r\n help='Limit the number of examples to evaluate on.')\r\n\r\n argp.remove_unused_columns = False\r\n 
training_args, args = argp.parse_args_into_dataclasses()\r\n args.remove_unused_columns=False\r\n training_args.remove_unused_columns=False\r\n```\r\n\r\n\r\n```\r\n**************** train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n**************** train_dataset_featurized: Dataset({\r\n features: ['attention_mask', 'end_positions', 'input_ids', 'start_positions', 'token_type_ids'],\r\n num_rows: 87714\r\n})\r\n```", "Hi, I print the value, all are set to False, but don't work.\r\n```\r\n********************* training_args: TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_steps=None,\r\nevaluation_strategy=IntervalStrategy.NO,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\ngradient_accumulation_steps=1,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nignore_data_skip=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=./re_trained_model/runs/Dec01_14-15-08_399b9290604c,\r\nlogging_first_step=False,\r\nlogging_steps=500,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_type=SchedulerType.LINEAR,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noutput_dir=./re_trained_model,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=8,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=re_trained_model,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=False,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./re_trained_model,\r\nsave_on_each_node=False,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\n)\r\n```\r\n```\r\n********************* args: Namespace(dataset='squad', max_eval_samples=None, max_length=128, max_train_samples=None, model='google/electra-small-discriminator', remove_unused_columns=False, task='qa')\r\n2021-12-01 14:15:10,048 - WARNING - datasets.builder - Reusing dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\nSome weights of the model checkpoint at google/electra-small-discriminator were not used when initializing ElectraForQuestionAnswering: ['discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense.bias']\r\n- This IS expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ElectraForQuestionAnswering were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nPreprocessing data... (this takes a little bit, should only happen once per dataset)\r\n```", "Hmmm, it might be because the default data collator removes all the fields with `string` type:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c0dd199c8305903564c2edeae23d294edd4b321/src/transformers/data/data_collator.py#L107-L112\r\n\r\nI guess you also need a custom data collator that doesn't remove them.", "can you give a tutorial about how to do this?", "I overwrite **get_train_dataloader**, and remove **_remove_unused_columns**, but it doesn't work.\r\n\r\n```\r\n def get_train_dataloader(self) -> DataLoader:\r\n \"\"\"\r\n Returns the training :class:`~torch.utils.data.DataLoader`.\r\n\r\n Will use no sampler if :obj:`self.train_dataset` does not implement :obj:`__len__`, a random sampler (adapted\r\n to distributed training if necessary) otherwise.\r\n\r\n Subclass and override this method if you want to inject some custom behavior.\r\n \"\"\"\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n\r\n train_dataset = self.train_dataset\r\n # if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):\r\n # train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n\r\n if isinstance(train_dataset, torch.utils.data.IterableDataset):\r\n if self.args.world_size > 1:\r\n train_dataset = IterableDatasetShard(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_processes=self.args.world_size,\r\n process_index=self.args.process_index,\r\n )\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n collate_fn=self.data_collator,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n\r\n train_sampler = self._get_train_sampler()\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler,\r\n collate_fn=self.data_collator,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n```", "Hi, it works now, thank you.\r\n1. **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**\r\n2. overwrite **get_train_dataloader**, and remove **_remove_unused_columns**\r\n3. add new fields, and can be got in **inputs**. " ]
created_at: 1,638,351,309,000
updated_at: 1,638,374,559,000
closed_at: 1,638,374,559,000
author_association: NONE
active_lock_reason: null
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs ``` *********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2106, ..., 0, 0, 0], ..., [ 101, 2339, 2001, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} ``` ``` # This function preprocesses a question answering dataset, tokenizing the question and context text # and finding the right offsets for the answer spans in the tokenized context (to use as labels). # Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None): questions = [q.lstrip() for q in examples["question"]] max_seq_length = tokenizer.model_max_length # tokenize both questions and the corresponding context # if the context length is longer than max_length, we split it to several # chunks of max_length tokenized_examples = tokenizer( questions, examples["context"], truncation="only_second", max_length=max_seq_length, stride=min(max_seq_length // 2, 128), return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length" ) # Since one example might give us several features if it has a long context, # we need a map from a feature to its corresponding example. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position # in the original context. This will help us compute the start_positions # and end_positions to get the final answer string. offset_mapping = tokenized_examples.pop("offset_mapping") tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] tokenized_examples["example_id"] = [] for i, offsets in enumerate(offset_mapping): input_ids = tokenized_examples["input_ids"][i] # We will label features not containing the answer the index of the CLS token. cls_index = input_ids.index(tokenizer.cls_token_id) sequence_ids = tokenized_examples.sequence_ids(i) # from the feature idx to sample idx sample_index = sample_mapping[i] # get the answer for a feature answers = examples["answers"][sample_index] tokenized_examples["example_id"].append(examples["id"][sample_index]) if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != 1: token_start_index += 1 # End token index of the current span in the text. 
token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != 1: token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and \ offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append( token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples ``` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3353/timeline
performed_via_github_app: null
draft: null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
is_pull_request: false
**PR #3352: Make LABR dataset streamable**
url: https://api.github.com/repos/huggingface/datasets/issues/3352
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3352/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3352/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3352/events
html_url: https://github.com/huggingface/datasets/pull/3352
id: 1,068,102,994
node_id: PR_kwDODunzps4vO6uZ
number: 3,352
title: Make LABR dataset streamable
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
comments: []
created_at: 1,638,346,947,000
updated_at: 1,638,355,742,000
closed_at: 1,638,355,741,000
author_association: MEMBER
active_lock_reason: null
Fix LABR dataset to make it streamable. Related to: #3350.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3352/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3352", "html_url": "https://github.com/huggingface/datasets/pull/3352", "diff_url": "https://github.com/huggingface/datasets/pull/3352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3352.patch", "merged_at": 1638355741000 }
is_pull_request: true
**PR #3351: Add VCTK dataset**
url: https://api.github.com/repos/huggingface/datasets/issues/3351
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3351/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3351/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3351/events
html_url: https://github.com/huggingface/datasets/pull/3351
id: 1,068,094,873
node_id: PR_kwDODunzps4vO5AS
number: 3,351
title: Add VCTK dataset
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hello @patrickvonplaten, I hope it's okay to ping you with a (dumb) question!\r\n\r\nI've been trying to get `dl_manager.download_and_extract(_DL_URL)` to work with no avail. I verified that this is a problem on two different machines (lab server, GCP), so I doubt it's an issue with network connectivity. Here is the full trace.\r\n\r\n```\r\n(venv) (base) jaketae@jake-gpu1:~/documents/datasets$ datasets-cli test datasets/vctk --save_infos --all_configs\r\nTesting builder 'main' (1/1)\r\nDownloading and preparing dataset vctk/main to /home/jaketae/.cache/huggingface/datasets/vctk/main/0.9.2/2bfa52a93469fa9d6d4b1831c6511db5442b9f4e48620aef2bc3890d7a5268a8...\r\nTraceback (most recent call last):\r\n File \"/home/jaketae/documents/datasets/venv/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home/jaketae/documents/datasets/src/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/home/jaketae/documents/datasets/src/datasets/commands/test.py\", line 146, in run\r\n builder.download_and_prepare(\r\n File \"/home/jaketae/documents/datasets/src/datasets/builder.py\", line 593, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jaketae/documents/datasets/src/datasets/builder.py\", line 659, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jaketae/.cache/huggingface/modules/datasets_modules/datasets/vctk/2bfa52a93469fa9d6d4b1831c6511db5442b9f4e48620aef2bc3890d7a5268a8/vctk.py\", line 76, in _split_generators\r\n root_path = dl_manager.download_and_extract(_DL_URL)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 283, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/py_utils.py\", line 234, in map_nested\r\n return function(data_struct)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 216, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/file_utils.py\", line 608, in get_from_cache\r\n raise ConnectionError(f\"Couldn't reach {url}\")\r\nConnectionError: Couldn't reach https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip\r\n```\r\n\r\nOn my local, however, the URL correctly points to the download zip file. My admittedly naive guess is that the website is web-crawler or scraper proof (requiring specific headers, etc.), but I also think I might have just missed a very basic step in the process.\r\n\r\nApologies for the delayed PR, and TIA for the help!", "Hey @jaketae, \r\n\r\nHmm, yeah I don't know really either - the link also works correctly for me when doing:\r\n\r\n```\r\nwget https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip\r\n```\r\n\r\nI think however that I had a similar problem previously with Edinburgh's (`.ed.ac.uk`) websites which I solved with the following hack. 
Not sure if this could be the same problem here...\r\nhttps://github.com/huggingface/datasets/blob/e1104ad5d3e83f8b1571e0d6fef4fdabf0a1fde5/datasets/ami/ami.py#L364\r\n\r\n", "The AMI dataset is stored under a different website though it seems: `\"https://groups.inf.ed.ac.uk/ami/AMICorpusMirror//amicorpus/{}/audio/{}\"`\r\n\r\nso not 100p sure if this solves the problem", "Hi @patrickvonplaten,\r\n\r\nThanks for the feedback! Sadly, disabling multi-processing didn't cut it for me. \r\n\r\nI've been looking at VCTK code in [`torchaudio`](https://pytorch.org/audio/stable/_modules/torchaudio/datasets/vctk.html) and [`tfds`](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/vctk.py). I don't think they're using a hack to accomplish this, so I'll try to look into it to see if I can pinpoint the cause. I'll keep you in the loop here. Thank you!", "Hi @patrickvonplaten, \r\n\r\nAfter more investigation, I found that simply increasing `etag_timeout` in `get_from_cache` from 10 to 100 solved it. However, unless I'm missing something, an issue is that `etag_timeout` is basically hard-coded as a default parameter because `cached_path`, which calls `get_from_cache` has no way of modifying the default. \r\n\r\nhttps://github.com/huggingface/datasets/blob/b25ac1d62670e7b339ed552ecc37846d2abd30c7/src/datasets/utils/file_utils.py#L298-L310\r\n\r\nhttps://github.com/huggingface/datasets/blob/b25ac1d62670e7b339ed552ecc37846d2abd30c7/src/datasets/utils/file_utils.py#L497-L510\r\n\r\n\r\nI can think of two solutions.\r\n\r\n* Simply increase the default to 100\r\n* Allow `etag_timeout` to be modifiable on a per-dataset basis by integrating it to `download_config` (maybe this is already supported?)\r\n\r\nThank you!", "I think in this case we can increase the `etag_timeout` - what do you think @lhoestq @albertvillanova ?", "Yes let's increase it to 100 for the moment. Later we can see if it really needed to move it into `download_config` or not", "Thanks for the feedback @patrickvonplaten @lhoestq, I'll continue working on this in that direction!", "Hello @patrickvonplaten, VCTK is ready for review! 
\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"vctk\")\r\nUsing the latest cached version of the module from /home/lily/jt856/.cache/huggingface/modules/datasets_modules/datasets/vctk/b7aa278182de3a7aa2897cbd12c1e19f1af9840a2ead69a6d710fdbc1d2df02a (last modified on Sat Dec 25 00:47:31 2021) since it couldn't be found locally at vctk., or remotely on the Hugging Face Hub.\r\nReusing dataset vctk (/home/lily/jt856/.cache/huggingface/datasets/vctk/main/0.9.2/b7aa278182de3a7aa2897cbd12c1e19f1af9840a2ead69a6d710fdbc1d2df02a)\r\n100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 198.35it/s]\r\n>>> len(ds[\"train\"])\r\n88156\r\n>>> ds[\"train\"][0]\r\n{'speaker_id': 'p225', 'audio': {'path': '/home/lily/jt856/.cache/huggingface/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac', 'array': array([0.00485229, 0.00689697, 0.00619507, ..., 0.00811768, 0.00836182,\r\n 0.00854492], dtype=float32), 'sampling_rate': 48000}, 'file': '/home/lily/jt856/.cache/huggingface/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac', 'text': 'Please call Stella.', 'text_id': '001', 'age': '23', 'gender': 'F', 'accent': 'English', 'region': 'Southern England', 'comment': ''}\r\n```\r\nA number of tests are failing on CircleCI, but from my limited knowledge they appear to be complaining about `conda` and `pip`/`wheel`-related incompatibilities. But if I'm reading them wrong and it's an issue with this PR, please let me know and I'll try to fix them.\r\n\r\nBelated merry Christmas and a happy new year!" ]
created_at: 1,638,346,397,000
updated_at: 1,640,704,320,000
closed_at: 1,640,703,908,000
author_association: CONTRIBUTOR
active_lock_reason: null
Fixes #1837.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3351/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3351", "html_url": "https://github.com/huggingface/datasets/pull/3351", "diff_url": "https://github.com/huggingface/datasets/pull/3351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3351.patch", "merged_at": 1640703907000 }
is_pull_request: true
**PR #3350: Avoid content-encoding issue while streaming datasets**
url: https://api.github.com/repos/huggingface/datasets/issues/3350
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3350/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3350/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3350/events
html_url: https://github.com/huggingface/datasets/pull/3350
id: 1,068,078,160
node_id: PR_kwDODunzps4vO1aj
number: 3,350
title: Avoid content-encoding issue while streaming datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
comments: []
created_at: 1,638,345,408,000
updated_at: 1,638,346,501,000
closed_at: 1,638,346,500,000
author_association: MEMBER
active_lock_reason: null
This PR will fix streaming of datasets served with gzip content-encoding: ``` ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` Fix #2918. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3350/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3350", "html_url": "https://github.com/huggingface/datasets/pull/3350", "diff_url": "https://github.com/huggingface/datasets/pull/3350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3350.patch", "merged_at": 1638346500000 }
is_pull_request: true
**PR #3349: raise exception instead of using assertions.**
url: https://api.github.com/repos/huggingface/datasets/issues/3349
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3349/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3349/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3349/events
html_url: https://github.com/huggingface/datasets/pull/3349
id: 1,067,853,601
node_id: PR_kwDODunzps4vOF-s
number: 3,349
title: raise exception instead of using assertions.
{ "login": "manisnesan", "id": 153142, "node_id": "MDQ6VXNlcjE1MzE0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manisnesan", "html_url": "https://github.com/manisnesan", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "repos_url": "https://api.github.com/users/manisnesan/repos", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. ", "@mariosasko - Hello, Are there any additional changes required from my end??. Wondering if this PR can be merged or still pending on additional steps.", "@mariosasko - The approved changes in the PR now has conflicts with the master branch. Would you like me to resolve the conflicts??. Let me know. ", "@mariosasko @lhoestq - Gentle reminder about my previous question. ", "Hi ! Thanks for the heads up :)\r\nI just resolved the conflicts, it should be alright now", "Merging, thanks for the help @manisnesan !" ]
created_at: 1,638,322,671,000
updated_at: 1,640,016,447,000
closed_at: 1,640,016,447,000
author_association: CONTRIBUTOR
active_lock_reason: null
fix for the remaining files https://github.com/huggingface/datasets/issues/3171
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3349/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3349", "html_url": "https://github.com/huggingface/datasets/pull/3349", "diff_url": "https://github.com/huggingface/datasets/pull/3349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3349.patch", "merged_at": 1640016447000 }
is_pull_request: true
**PR #3348: BLEURT: Match key names to correspond with filename**
url: https://api.github.com/repos/huggingface/datasets/issues/3348
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3348/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3348/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3348/events
html_url: https://github.com/huggingface/datasets/pull/3348
id: 1,067,831,113
node_id: PR_kwDODunzps4vOBOQ
number: 3,348
title: BLEURT: Match key names to correspond with filename
{ "login": "jaehlee", "id": 11873078, "node_id": "MDQ6VXNlcjExODczMDc4", "avatar_url": "https://avatars.githubusercontent.com/u/11873078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaehlee", "html_url": "https://github.com/jaehlee", "followers_url": "https://api.github.com/users/jaehlee/followers", "following_url": "https://api.github.com/users/jaehlee/following{/other_user}", "gists_url": "https://api.github.com/users/jaehlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaehlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaehlee/subscriptions", "organizations_url": "https://api.github.com/users/jaehlee/orgs", "repos_url": "https://api.github.com/users/jaehlee/repos", "events_url": "https://api.github.com/users/jaehlee/events{/privacy}", "received_events_url": "https://api.github.com/users/jaehlee/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
assignees: []
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Thanks for the suggestion! I think the current checked-in `CHECKPOINT_URLS` is already not working. I believe anyone who tried using the new ckpts (`BLEURT-20-X`) can't unless this fix is in. The zip file from bleurt side unzips to directory name matching the filename (capitalized for new ones). For example without current changes I'd get the following error\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-5-f6832fe20f84> in <module>()\r\n 1 predictions = [\"hello there\", \"general kenobi\"]\r\n 2 references = [\"hello there\", \"general kenobi\"]\r\n----> 3 bleurt = datasets.load_metric(\"bleurt\", \"bleurt-20\")\r\n 4 results = bleurt.compute(predictions=predictions, references=references)\r\n\r\n4 frames\r\n/usr/local/lib/python3.7/dist-packages/bleurt/checkpoint.py in read_bleurt_config(path)\r\n 84 \"\"\"Reads and checks config file from a BLEURT checkpoint.\"\"\"\r\n 85 assert tf.io.gfile.exists(path), \\\r\n---> 86 \"Could not find BLEURT checkpoint {}\".format(path)\r\n 87 config_path = os.path.join(path, CONFIG_FILE)\r\n 88 assert tf.io.gfile.exists(config_path), \\\r\n\r\nAssertionError: Could not find BLEURT checkpoint /root/.cache/huggingface/metrics/bleurt/bleurt-20/downloads/extracted/e34c60f1a05394ecda54e253a10413ca7b5d59f9a23f3cc73258c6b78ffa2f50/bleurt-20\r\n```\r\ninspecting specified path I see that directory name is `BLEURT-20` instead of `bleurt-20`. \r\nOther solution similar to your suggestion is meddle with `dl_manager.download_and_extract` to unzip to paths with lowering all the paths but I imagine this will affect other parts of the library. ", "Indeed, good catch ! Your solution that fixes `CHECKPOINT_URLS ` is simple and works well, thanks :)\r\n\r\nFurthermore to avoid breaking changes though we could also keep the support for the lowercase one:\r\n```python\r\n if self.config_name.lower() in CHECKPOINT_URLS:\r\n checkpoint_name = self.config_name.lower()\r\n elif self.config_name.upper() in CHECKPOINT_URLS:\r\n checkpoint_name = self.config_name.upper()\r\n else:\r\n raise KeyError(\r\n f\"{self.config_name} model not found. You should supply the name of a model checkpoint for bleurt in {CHECKPOINT_URLS.keys()}\"\r\n )\r\n```\r\nand then we can use `checkpoint_name` instead of `self.config_name` to download and instantiate the model:\r\n```python\r\n model_path = dl_manager.download_and_extract(CHECKPOINT_URLS[checkpoint_name])\r\n self.scorer = score.BleurtScorer(os.path.join(model_path, checkpoint_name))\r\n```\r\n\r\nPlease let me know if that sounds reasonable to you !", "Thanks for the suggestion! I believe your suggestion should work to make keys case insensitive. Changes are committed to the PR now. " ]
created_at: 1,638,320,478,000
updated_at: 1,638,893,217,000
closed_at: 1,638,893,217,000
author_association: CONTRIBUTOR
active_lock_reason: null
In order to properly locate downloaded ckpt files key name needs to match filename. Correcting change introduced in #3235
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3348/timeline
performed_via_github_app: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3348", "html_url": "https://github.com/huggingface/datasets/pull/3348", "diff_url": "https://github.com/huggingface/datasets/pull/3348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3348.patch", "merged_at": 1638893217000 }
is_pull_request: true
**PR #3347: iter_archive for zip files**
url: https://api.github.com/repos/huggingface/datasets/issues/3347
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3347/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3347/events
html_url: https://github.com/huggingface/datasets/pull/3347
id: 1,067,738,902
node_id: PR_kwDODunzps4vNthw
number: 3,347
title: iter_archive for zip files
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://huggingface.co/docs/datasets/upload_dataset.html#upload-your-files) for a tutorial on how to upload files)" ]
1,638,311,657,000
1,638,577,342,000
1,638,577,331,000
CONTRIBUTOR
null
* In this PR, I added the option to iterate through zip files for `download_manager.py` only. * The next PR will apply the same change to `streaming_download_manager.py`. * Related issue: #3272. ## Comments : * There is no `.isreg()` equivalent in the zipfile library to check whether a file is regular, so I used `.is_dir()` instead to skip directories. * For now I got `streaming_download_manager.py` working for local zip files, but not for URLs. I get the following error when I test it on an archive on Google Drive, so I am still working on it. `BlockSizeError: Got more bytes so far (>2112) than requested (22)` ## Tasks : - [x] download_manager.py - [ ] streaming_download_manager.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3347/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3347", "html_url": "https://github.com/huggingface/datasets/pull/3347", "diff_url": "https://github.com/huggingface/datasets/pull/3347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3347.patch", "merged_at": null }
true
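A minimal sketch of the approach described in the PR body above, using only the standard library: since `zipfile` has no `.isreg()` equivalent, `ZipInfo.is_dir()` is used to skip directory entries. The function name is illustrative and not the actual `datasets` implementation:

```python
import zipfile


def iter_zip_archive(path):
    """Yield (filename, file object) pairs for the regular files in a zip archive."""
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            # zipfile has no .isreg(), so skip directory entries instead
            if info.is_dir():
                continue
            with zf.open(info) as file_obj:
                yield info.filename, file_obj
```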
https://api.github.com/repos/huggingface/datasets/issues/3346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3346/comments
https://api.github.com/repos/huggingface/datasets/issues/3346/events
https://github.com/huggingface/datasets/issues/3346
1,067,632,365
I_kwDODunzps4_osbt
3,346
Failed to convert `string` with pyarrow for QED since 1.15.0
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Scratch that, probably the old and incompatible usage of dataset builder from promptsource.", "Actually, re-opening this issue cause the error persists\r\n\r\n```python\r\n>>> load_dataset(\"qed\")\r\nDownloading and preparing dataset qed/qed (download: 13.43 MiB, generated: 9.70 MiB, post-processed: Unknown size, total: 23.14 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/qed/qed/1.0.0/47d8b6f033393aa520a8402d4baf2d6bdc1b2fbde3dc156e595d2ef34caf7d75...\r\n100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2228.64it/s]\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py\", line 1669, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 594, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 681, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 1083, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 468, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 339, in write_examples_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 229, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 125, in __arrow_array__\r\n out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)\r\n File \"pyarrow/array.pxi\", line 315, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert 'in' with type str: tried to convert to boolean\r\n```\r\n\r\nEnvironment (datasets and pyarrow):\r\n\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ datasets-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 1.16.1\r\n- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.1\r\n```\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ pip show pyarrow\r\nName: pyarrow\r\nVersion: 6.0.1\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License, Version 2.0\r\nLocation: /home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages\r\nRequires: numpy\r\nRequired-by: streamlit, datasets\r\n```" ]
1,638,303,102,000
1,639,492,745,000
1,639,492,745,000
CONTRIBUTOR
null
## Describe the bug Loading QED was fine until 1.15.0. related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670 Not sure where the root cause is, but here are some candidates: - #3158 - #3120 - #3196 - #2891 ## Steps to reproduce the bug ```python load_dataset("qed") ``` ## Expected results Loading completed. ## Actual results ```shell ArrowInvalid: Could not convert in with type str: tried to convert to boolean Traceback: File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/script_runner.py", line 354, in _run_script exec(code, module.__dict__) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/app.py", line 260, in <module> dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 543, in wrapped_func return get_or_create_cached_value() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 527, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/utils.py", line 49, in get_dataset builder_instance.download_and_prepare() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 1106, in _prepare_split num_examples, num_bytes = writer.finalize() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 456, in finalize self.write_examples_on_file() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 325, in write_examples_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 121, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.0, 1.16.1 - Platform: macOS 1.15.7 or above - Python version: 3.7.12 and 3.9 - PyArrow version: 3.0.0, 5.0.0, 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3346/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3345/comments
https://api.github.com/repos/huggingface/datasets/issues/3345/events
https://github.com/huggingface/datasets/issues/3345
1,067,622,951
I_kwDODunzps4_oqIn
3,345
Failed to download species_800 from Google Drive zip file
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi,\r\n\r\nthe dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?", "> Hi,\r\n> \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI have tried that many times with both load_dataset() and a browser almost simultaneously. The browser always works for me while load_dataset() fails.", "@mariosasko \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI've tried yet again just a moment ago. This time I realize that, the step `(... post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976...` and the one after seem unstable. If I want to retry, I will have to delete it (and probably other cache lock files). It **_sometimes_** works.\r\n\r\nBut I didn't try `download_mode=\"force_redownload\"` yet.\r\n\r\nAnyway, I suppose this isn't really a pressing issue for the time being, so I'm going to close this. Thank you.\r\n\r\n" ]
1,638,302,428,000
1,638,381,195,000
1,638,381,195,000
CONTRIBUTOR
null
## Describe the bug One can manually download the zip file on Google Drive, but `load_dataset()` cannot. related: #3248 ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> s800 = load_dataset("species_800") ``` ## Expected results species_800 downloaded. ## Actual results ```shell Downloading: 5.68kB [00:00, 1.22MB/s] Downloading: 2.70kB [00:00, 691kB/s] Downloading and preparing dataset species800/species_800 (download: 17.36 MiB, generated: 3.53 MiB, post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976... 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in map_nested for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in <listcomp> for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0, 1.15.0, 1.16.1 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3345/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
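As the closing comment in the thread above suggests, a forced re-download can rule out a stale or partially written cache when the Google Drive fetch is flaky. A sketch of that (untried, per the reporter) workaround:

```python
from datasets import load_dataset

# Ignore any cached (possibly partial) download and fetch the archive again
s800 = load_dataset("species_800", download_mode="force_redownload")
```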
https://api.github.com/repos/huggingface/datasets/issues/3344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3344/comments
https://api.github.com/repos/huggingface/datasets/issues/3344/events
https://github.com/huggingface/datasets/pull/3344
1,067,567,603
PR_kwDODunzps4vNJwd
3,344
Add ArrayXD docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,298,411,000
1,638,389,763,000
1,638,387,332,000
CONTRIBUTOR
null
Documents support for the dynamic first dimension in `ArrayXD` from #2891, and explains the `ArrayXD` feature in general. Let me know if I'm missing anything @lhoestq :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3344/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3344", "html_url": "https://github.com/huggingface/datasets/pull/3344", "diff_url": "https://github.com/huggingface/datasets/pull/3344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3344.patch", "merged_at": 1638387332000 }
true
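For reference, a minimal sketch of the dynamic first dimension that this PR documents (added in #2891): passing `None` as the first entry of `shape` lets each row hold an array with a different first-dimension length.

```python
from datasets import Array2D, Dataset, Features

# First dimension is dynamic (None), second dimension is fixed at 4
features = Features({"seq": Array2D(shape=(None, 4), dtype="float32")})
ds = Dataset.from_dict(
    {"seq": [[[1.0, 2.0, 3.0, 4.0]] * 2, [[0.0, 0.0, 0.0, 0.0]] * 3]},  # a 2x4 and a 3x4 array
    features=features,
)
print(len(ds[0]["seq"]), len(ds[1]["seq"]))  # 2 3
```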
https://api.github.com/repos/huggingface/datasets/issues/3343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3343/comments
https://api.github.com/repos/huggingface/datasets/issues/3343/events
https://github.com/huggingface/datasets/pull/3343
1,067,505,507
PR_kwDODunzps4vM8yB
3,343
Better error message when download fails
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,293,930,000
1,638,358,079,000
1,638,358,078,000
MEMBER
null
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails. In particular the error now shows: - the error from the HEAD request if there's one - otherwise the response code of the HEAD request I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized). While playing around with this I also fixed a minor issue with the `force_download` parameter that was not always taken into account
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3343/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3343", "html_url": "https://github.com/huggingface/datasets/pull/3343", "diff_url": "https://github.com/huggingface/datasets/pull/3343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3343.patch", "merged_at": 1638358078000 }
true
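A rough illustration of the behavior the PR describes — this is not the actual `datasets` code, and the function name and message wording are invented for the example:

```python
import requests


def head_and_raise(url, use_auth_token=None):
    """Probe a URL with a HEAD request and raise an informative error on failure."""
    headers = {"authorization": f"Bearer {use_auth_token}"} if isinstance(use_auth_token, str) else {}
    try:
        response = requests.head(url, headers=headers, timeout=10.0, allow_redirects=True)
    except requests.exceptions.RequestException as e:
        # Show the error from the HEAD request if there is one
        raise ConnectionError(f"Couldn't reach {url} ({e})") from e
    if response.status_code == 401:
        # The Hub returns 401 (Unauthorized) for private or gated repositories
        raise ConnectionError(f"Unauthorized for URL {url}. Please pass `use_auth_token=True`.")
    if response.status_code >= 400:
        # Otherwise show the response code of the HEAD request
        raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
```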
https://api.github.com/repos/huggingface/datasets/issues/3342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3342/comments
https://api.github.com/repos/huggingface/datasets/issues/3342/events
https://github.com/huggingface/datasets/pull/3342
1,067,481,390
PR_kwDODunzps4vM3wh
3,342
Fix ASSET dataset data URLs
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n" ]
1,638,292,410,000
1,639,493,400,000
1,639,493,400,000
CONTRIBUTOR
null
Change the branch name "master" to "main" in the data URLs, since facebookresearch has renamed that branch in their repository.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3342/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3342", "html_url": "https://github.com/huggingface/datasets/pull/3342", "diff_url": "https://github.com/huggingface/datasets/pull/3342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3342.patch", "merged_at": 1639493400000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3341/comments
https://api.github.com/repos/huggingface/datasets/issues/3341/events
https://github.com/huggingface/datasets/issues/3341
1,067,449,569
I_kwDODunzps4_n_zh
3,341
Mirror the canonical datasets to the Hugging Face Hub
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub", "I understand that the datasets are mirrored on the Hub now, right? Might I close @lhoestq @SBrandeis?" ]
1,638,290,525,000
1,643,208,457,000
1,643,208,457,000
CONTRIBUTOR
null
- [ ] create a repo on https://hf.co/datasets for every canonical dataset - [ ] on every commit related to a dataset, update the hf.co repo See https://github.com/huggingface/moon-landing/pull/1562 @SBrandeis: I'll let you edit this description if needed to clarify the intent.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3341/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3341/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3340/comments
https://api.github.com/repos/huggingface/datasets/issues/3340/events
https://github.com/huggingface/datasets/pull/3340
1,067,292,636
PR_kwDODunzps4vMP6Z
3,340
Fix JSON ClassLabel casting for integers
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,281,994,000
1,638,358,050,000
1,638,358,050,000
MEMBER
null
Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already contains integers. Indeed, it currently tries to convert the values from strings to integers without first checking whether they are already integers. For example, this currently fails: ```python from datasets import load_dataset, Features, ClassLabel path = "data.json" f = Features({"a": ClassLabel(names=["neg", "pos"])}) d = load_dataset("json", data_files=path, features=f) ``` data.json ```json {"a": 0} {"a": 1} ``` I fixed that by adding a line that checks the type of the JSON data before trying to convert it. cc @albertvillanova let me know if it sounds good to you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3340", "html_url": "https://github.com/huggingface/datasets/pull/3340", "diff_url": "https://github.com/huggingface/datasets/pull/3340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3340.patch", "merged_at": 1638358050000 }
true
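A minimal sketch of the kind of type check the PR description mentions — illustrative only, not the actual `datasets` internals: string labels go through `ClassLabel.str2int`, while values that are already integers pass through unchanged.

```python
from datasets import ClassLabel

label = ClassLabel(names=["neg", "pos"])


def encode_labels(values):
    # Check the type before converting, so integer labels are left as-is
    return [label.str2int(v) if isinstance(v, str) else v for v in values]


print(encode_labels([0, "pos", 1, "neg"]))  # [0, 1, 1, 0]
```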
https://api.github.com/repos/huggingface/datasets/issues/3339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3339/comments
https://api.github.com/repos/huggingface/datasets/issues/3339/events
https://github.com/huggingface/datasets/issues/3339
1,066,662,477
I_kwDODunzps4_k_pN
3,339
to_tf_dataset fails on TPU
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?\r\n> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe try to materialize it on disk and use TFRecordDataest instead.", "Hi @lhoestq @nbroad1881, I think it's very similar, yes. Unfortunately `to_tf_dataset` uses `tf.numpy_function` which can't be compiled - this is a necessary evil to load from the underlying Arrow dataset. We need to update the notebooks/examples to clarify that this won't work, or to identify a workaround. You may be able to get it to work on an actual cloud TPU VM, but those are quite new and we haven't tested it yet. ", "Thank you for the explanation. I didn't realize the nuances of `tf.numpy_function`. In this scenario, would it be better to use `export(format='tfrecord')` ? It's not quite the same, but for very large datasets that don't fit in memory it looks like it is the only option. I haven't used `export` before, but I do recall reading that there are suggestions for how big and how many tfrecords there should be to not bottleneck the TPU. It might be nice if there were a way for the `export` method to split the files up into appropriate chunk sizes depending on the size of the dataset and the number of devices. And if that is too much, it would be nice to be able to specify the number of files that would be created when using `export`. Well... maybe the user should just do the chunking themselves and call `export` a bunch of times. Whatever the case, you have been helpful. Thanks Tensorflow boy ;-) ", "Yeah, this is something we really should have a proper guide on. I'll make a note to test some things and make a 'TF TPU best practices' notebook at some point, but in the meantime I think your solution of exporting TFRecords will probably work. ", "Also: I knew that tweet would haunt me" ]
1,638,233,452,000
1,638,454,887,000
null
NONE
null
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs. ## Steps to reproduce the bug I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=sharing ## Expected results dataset from `to_tf_dataset` works in `model.fit` Right below the first error in the colab I use `tf.data.Dataset.from_tensor_slices` and `model.fit` works just fine. This is the desired outcome. ## Actual results ``` InternalError: 5 root error(s) found. (0) INTERNAL: {{function_node __inference_train_function_30558}} failed to connect to all addresses Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0: :{"created":"@1638231897.932218653","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3151,"referenced_errors":[{"created":"@1638231897.932216754","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":161,"grpc_status":14}]} [[{{node StatefulPartitionedCall}}]] [[MultiDeviceIteratorGetNextFromShard]] Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic. [[RemoteCall]] [[IteratorGetNextAsOptional]] [[tpu_compile_succeeded_assert/_14023832043698465348/_7/_439]] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0 - Tensorflow 2.7.0 - `transformers` 4.12.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3339/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3339/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
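A sketch of the TFRecord workaround discussed in the comments above: `to_tf_dataset` relies on `tf.numpy_function`, which TPUs cannot compile, so the data is materialized on disk first and read back with `TFRecordDataset`. The dataset name and file path here are placeholders, `Dataset.export` requires TensorFlow, and my assumption is that the dataset must be in numpy format before exporting; each record also still needs a parsing step matching your columns.

```python
import tensorflow as tf
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # any dataset; imdb is only an example
ds.set_format("numpy")  # export is assumed to expect numpy-formatted columns
ds.export("imdb_train.tfrecord", format="tfrecord")  # materialize the dataset as TFRecords

# TFRecordDataset reads from disk without py_function, so it is TPU-friendly;
# each serialized record must then be decoded with tf.io.parse_single_example
# and a feature description matching the exported columns.
tf_ds = tf.data.TFRecordDataset(["imdb_train.tfrecord"])
```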
https://api.github.com/repos/huggingface/datasets/issues/3338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3338/comments
https://api.github.com/repos/huggingface/datasets/issues/3338/events
https://github.com/huggingface/datasets/pull/3338
1,066,371,235
PR_kwDODunzps4vJRFM
3,338
[WIP] Add doctests for tutorials
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "I manage to remove the mentions of ellipsis in the code by launching the command as follows:\r\n\r\n```\r\npython -m doctest -v docs/source/load_hub.rst -o=ELLIPSIS\r\n```\r\n\r\nThe way you put your ellipsis will only work on mac, I've adapted it for linux as well with the following:\r\n\r\n```diff\r\n >>> from datasets import load_dataset_builder\r\n >>> dataset_builder = load_dataset_builder('imdb')\r\n- >>> print(dataset_builder.cache_dir) #doctest: +ELLIPSIS\r\n- /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n+ >>> print(dataset_builder.cache_dir)\r\n+ /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n```\r\n\r\nThis passes on my machine:\r\n\r\n```\r\nTrying:\r\n print(dataset_builder.cache_dir)\r\nExpecting:\r\n /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\nok\r\n```\r\n\r\nI'm getting a last error:\r\n\r\n```py\r\nExpected:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 1725\r\n })\r\n })\r\nGot:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 1725\r\n })\r\n })\r\n```\r\n\r\nBut this is due to `doctest` looking for an exact match and the list having an unordered print order. I wish `doctest` would be a bit more flexible with that." ]
1,638,211,246,000
1,641,497,763,000
null
CONTRIBUTOR
null
Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown. ### Issues A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle variable outputs with the `ELLIPSIS` directive. When I run doctest on the `load_hub.rst` file, doctest should recognize the expected output from the docstring, and the corresponding code sample in `load_hub.rst` should pass. I am having the same issue with handling tracebacks in the `load_dataset` function. From the docstring: ``` >>> dataset_builder.cache_dir #doctest: +ELLIPSIS /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... ``` Test result: ``` Failed example: dataset_builder.cache_dir Expected: /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... Got: /Users/steven/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1 ``` I am able to get the doctest to pass by adding the doctest directives (`ELLIPSIS` and `NORMALIZE_WHITESPACE`) to the code samples in the `rst` file directly. But my understanding is that these directives should also work in the docstrings of the functions. I am running the test from the root of the directory: ``` python -m doctest -v docs/source/load_hub.rst ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3338/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3338", "html_url": "https://github.com/huggingface/datasets/pull/3338", "diff_url": "https://github.com/huggingface/datasets/pull/3338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3338.patch", "merged_at": null }
true
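The thread above enables doctest option flags from the command line; the same flags can be set programmatically with the standard library, which can be handy in CI. A sketch (the .rst path is the one used in the PR):

```python
import doctest

# Equivalent to `python -m doctest -v docs/source/load_hub.rst -o ELLIPSIS`,
# with NORMALIZE_WHITESPACE added so reflowed dict/list output still matches
results = doctest.testfile(
    "docs/source/load_hub.rst",
    module_relative=False,  # treat the path as filesystem-relative, not package-relative
    optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
    verbose=True,
)
print(f"{results.attempted - results.failed}/{results.attempted} examples passed")
```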
https://api.github.com/repos/huggingface/datasets/issues/3337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3337/comments
https://api.github.com/repos/huggingface/datasets/issues/3337/events
https://github.com/huggingface/datasets/issues/3337
1,066,232,936
I_kwDODunzps4_jWxo
3,337
Typing of Dataset.__getitem__ could be improved.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for the suggestion, I didn't know about this decorator.\r\n\r\nIf you are interesting in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign you to this issue, you can comment `#self-assign` in this thread.\r\n\r\n`Dataset.__getitem__` is defined right here: https://github.com/huggingface/datasets/blob/e6f1352fe19679de897f3d962e616936a17094f5/src/datasets/arrow_dataset.py#L1840", "#self-assign" ]
1,638,202,811,000
1,639,477,734,000
1,639,477,734,000
CONTRIBUTOR
null
## Describe the bug The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload) ## Steps to reproduce the bug Let's have a file `test.py` ```python from typing import List, Dict, Any from datasets import Dataset ds = Dataset.from_dict({ 'a': [1,2,3], 'b': ["1", "2", "3"] }) one_colum: List[str] = ds['a'] some_index: Dict[Any, Any] = ds[1] ``` ## Expected results Running `mypy test.py` should not give any error. ## Actual results ``` test.py:10: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "List[str]") test.py:11: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "Dict[Any, Any]") Found 2 errors in 1 file (checked 1 source file) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3337/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3337/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
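A minimal sketch of the `@overload` approach proposed in issue 3337 above; the stub class and the exact return types are illustrative assumptions, not the real signatures in `datasets`:

```python
from typing import Any, Dict, List, Union, overload

class Dataset:
    # Hypothetical stub: overloads let mypy infer the return type from the
    # key type instead of falling back to a broad Union.
    @overload
    def __getitem__(self, key: str) -> List[Any]: ...  # a whole column
    @overload
    def __getitem__(self, key: int) -> Dict[str, Any]: ...  # a single row

    def __getitem__(self, key: Union[str, int]) -> Union[List[Any], Dict[str, Any]]:
        raise NotImplementedError

ds = Dataset()
# mypy now accepts these annotations without manual type narrowing:
# one_column: List[Any] = ds["a"]
# some_row: Dict[str, Any] = ds[1]
```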
https://api.github.com/repos/huggingface/datasets/issues/3336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3336/comments
https://api.github.com/repos/huggingface/datasets/issues/3336/events
https://github.com/huggingface/datasets/pull/3336
1,066,208,436
PR_kwDODunzps4vIwUE
3,336
Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,201,539,000
1,638,201,539,000
null
CONTRIBUTOR
null
Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays. TODOs: * [ ] Cleaner code * [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object) * [ ] Fix some issues with zero-dim tensors * [ ] Tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3336/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3336", "html_url": "https://github.com/huggingface/datasets/pull/3336", "diff_url": "https://github.com/huggingface/datasets/pull/3336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3336.patch", "merged_at": null }
true
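If PR 3336 above lands, something like the following should become expressible; this is a hypothetical usage sketch based on the existing `ArrayXD` feature types, not a confirmed API:

```python
import numpy as np
from datasets import Array3D, Dataset, Features

# Hypothetical once multiple dynamic dimensions are supported:
# variable-height, variable-width RGB images in a single column.
features = Features({"image": Array3D(shape=(None, None, 3), dtype="uint8")})
ds = Dataset.from_dict(
    {"image": [np.zeros((2, 4, 3), "uint8"), np.zeros((5, 7, 3), "uint8")]},
    features=features,
)
df = ds.to_pandas()  # to_pandas() support for dynamic arrays is part of this PR
```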
https://api.github.com/repos/huggingface/datasets/issues/3335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3335/comments
https://api.github.com/repos/huggingface/datasets/issues/3335/events
https://github.com/huggingface/datasets/pull/3335
1,066,064,126
PR_kwDODunzps4vISGy
3,335
add Speech commands dataset
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "@anton-l ping", "@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! 🤗\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorrow afternoon if that's ok. \r\n", "@lhoestq Hi Quentin!\r\n\r\nI've implemented (hopefully, correctly) the streaming compatibility but the problem with the current approach is that we first need to iterate over the full archive anyway to get the list of filenames for train and validation sets (see [this](https://github.com/huggingface/datasets/pull/3335/files#diff-aeea540d136025e30a842856779e9c6485a5dc6fc9eb7fd6d3be2acd2f49b8e3R186), the same approach is implemented in TFDS version). Only after that, we can generate examples, so we cannot stream the dataset before the first iteration ends and it takes some time. It's probably not the most effective way. \r\n\r\nIf the streaming mode is turned off, this approach (with two iterations) is actually slower than the previous implementation (with archive extraction). \r\n\r\nMy suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them [here](https://drive.google.com/drive/folders/1oMrZHzPgHAKprKJuvih91CM8KMSzh_pL?usp=sharing). I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n", "Hi ! Thanks for the changes :)\r\n\r\n> My suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them here. I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n\r\nI agree, I just uploaded them on AWS\r\n\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_train.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_validation.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_validation.tar.gz\r\n\r\nNote that in the future we can move those files to actual repositories on the Hugging Face Hub, since we are migrating the datasets from this repository to the Hugging Face Hub (as mirrors), to make them more accessible to the community.", "@lhoestq Thank you! Gonna look at this tomorrow :)", "@lhoestq I've modified the code to fit new data format, now it works for v0.01 but doesn't work for v0.02 as the training archive is missing. Could you please create a mirror for that one too? 
You can find it [here](https://drive.google.com/file/d/1mPjnVMYb-VhPprGlOX8v9TBT1GT-rtcp/view?usp=sharing)\r\n\r\nAnd when it's done I'll need to regenerate all the meta / dummy stuff, and this version will be ready for a review :)", "Here you go :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_train.tar.gz", "FYI I juste merged a fix for the Windows CI error on `master`, feel free to merge `master` again into your branch", "All green ! I had to fix some minor stuff in the CI but it's good now\r\n\r\nNext step is to mark it as ready for review, and I think it's all good so we can merge 🚀 ", "@lhoestq 🤗", ":tada: " ]
1,638,193,967,000
1,639,132,641,000
1,639,132,215,000
CONTRIBUTOR
null
closes #3283
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3335/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3335", "html_url": "https://github.com/huggingface/datasets/pull/3335", "diff_url": "https://github.com/huggingface/datasets/pull/3335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3335.patch", "merged_at": 1639132215000 }
true
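For reference, loading the merged dataset in streaming mode might look like this; the repository name `speech_commands` and the config name `v0.01` are assumptions based on the thread above:

```python
from datasets import load_dataset

# Streaming avoids downloading the per-split archives up front;
# "speech_commands" and "v0.01" are assumed identifiers.
ds = load_dataset("speech_commands", "v0.01", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())
```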
https://api.github.com/repos/huggingface/datasets/issues/3334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3334/comments
https://api.github.com/repos/huggingface/datasets/issues/3334/events
https://github.com/huggingface/datasets/issues/3334
1,065,983,923
I_kwDODunzps4_iZ-z
3,334
Integrate Polars library
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "If possible, a neat API could be something like `Dataset.to_polars()`, as well as `Dataset.set_format(\"polars\")`", "Note they use a \"custom\" implementation of Arrow: [Arrow2](https://github.com/jorgecarleitao/arrow2)." ]
1,638,189,114,000
1,638,190,872,000
null
MEMBER
null
Check potential integration of the Polars library: https://github.com/pola-rs/polars - Benchmark: https://h2oai.github.io/db-benchmark/ CC: @thomwolf @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3334/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3334/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
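Until (and unless) a native `Dataset.to_polars()` exists, a stopgap conversion through the public pandas bridge could look like this sketch:

```python
import polars as pl
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})
# Round-trip via pandas for now; a hypothetical Dataset.to_polars()
# would presumably go through Arrow directly instead.
df = pl.from_pandas(ds.to_pandas())
print(df)
```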
https://api.github.com/repos/huggingface/datasets/issues/3333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3333/comments
https://api.github.com/repos/huggingface/datasets/issues/3333/events
https://github.com/huggingface/datasets/issues/3333
1,065,346,919
I_kwDODunzps4_f-dn
3,333
load JSON files, get the errors
{ "login": "yanllearnn", "id": 38966558, "node_id": "MDQ6VXNlcjM4OTY2NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanllearnn", "html_url": "https://github.com/yanllearnn", "followers_url": "https://api.github.com/users/yanllearnn/followers", "following_url": "https://api.github.com/users/yanllearnn/following{/other_user}", "gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions", "organizations_url": "https://api.github.com/users/yanllearnn/orgs", "repos_url": "https://api.github.com/users/yanllearnn/repos", "events_url": "https://api.github.com/users/yanllearnn/events{/privacy}", "received_events_url": "https://api.github.com/users/yanllearnn/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets`", "> \r\n\r\nbut I want to load local JSON file by command\r\n`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\n**squad-retrain-data/train-v2.0.json** is the local JSON file, how to load it and map it to a special structure?", "You can load it with `dataset = datasets.load_dataset('json', data_files=args.dataset)` as you said.\r\nThen if you need to apply additional processing to map it to a special structure, you can use rename columns or use `dataset.map`. For more information, you can check the documentation here: https://huggingface.co/docs/datasets/process.html\r\n\r\nAlso feel free to share your `run.py` code so we can take a look", "```\r\n# Dataset selection\r\n if args.dataset.endswith('.json') or args.dataset.endswith('.jsonl'):\r\n dataset_id = None\r\n # Load from local json/jsonl file\r\n dataset = datasets.load_dataset('json', data_files=args.dataset)\r\n # By default, the \"json\" dataset loader places all examples in the train split,\r\n # so if we want to use a jsonl file for evaluation we need to get the \"train\" split\r\n # from the loaded dataset\r\n eval_split = 'train'\r\n else:\r\n default_datasets = {'qa': ('squad',), 'nli': ('snli',)}\r\n dataset_id = tuple(args.dataset.split(':')) if args.dataset is not None else \\\r\n default_datasets[args.task]\r\n # MNLI has two validation splits (one with matched domains and one with mismatched domains). Most datasets just have one \"validation\" split\r\n eval_split = 'validation_matched' if dataset_id == ('glue', 'mnli') else 'validation'\r\n # Load the raw data\r\n dataset = datasets.load_dataset(*dataset_id)\r\n```\r\n\r\nI want to load JSON squad dataset instead `dataset = datasets.load_dataset('squad')` to retrain the model. \r\n", "If your JSON has the same format as the SQuAD dataset, then you need to pass `field=\"data\"` to `load_dataset`, since the SQuAD format is one big JSON object in which the \"data\" field contains the list of questions and answers.\r\n```python\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n\r\nLet me know if that helps :)\r\n\r\n", "Yes, code works. but the format is not as expected.\r\n```\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n```\r\npython3 run.py --do_train --task qa --dataset squad --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n```\r\npython3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['title', 'paragraphs'],\r\n num_rows: 442\r\n})\r\n\r\nI want the JSON to have the same format as before features. https://github.com/huggingface/datasets/blob/master/datasets/squad_v2/squad_v2.py is the script dealing with **squad** but how can I apply it by using JSON? ", "Ok I see, you have the paragraphs so you just need to process them to extract the questions and answers. 
I think you can process the SQuAD-like data this way:\r\n```python\r\ndef process_squad(articles):\r\n out = {\r\n \"title\": [],\r\n \"context\": [],\r\n \"question\": [],\r\n \"id\": [],\r\n \"answers\": [],\r\n }\r\n for title, paragraphs in zip(articles[\"title\"], articles[\"paragraphs\"]):\r\n for paragraph in paragraphs:\r\n for qa in paragraph[\"qas\"]:\r\n out[\"title\"].append(title)\r\n out[\"context\"].append(paragraph[\"context\"])\r\n out[\"question\"].append(qa[\"question\"])\r\n out[\"id\"].append(qa[\"id\"])\r\n out[\"answers\"].append({\r\n \"answer_start\": [answer[\"answer_start\"] for answer in qa[\"answers\"]],\r\n \"text\": [answer[\"text\"] for answer in qa[\"answers\"]],\r\n })\r\n return out\r\n\r\ndataset = dataset.map(process_squad, batched=True, remove_columns=[\"paragraphs\"])\r\n```\r\n\r\nI adapted the code from [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py). The code takes as input a batch of articles (title + paragraphs) and gets all the questions and answers from the JSON structure.\r\n\r\nThe output is a dataset with `features: ['answers', 'context', 'id', 'question', 'title']`\r\n\r\nLet me know if that helps !\r\n", "Yes, this works. But how to get the training output during training the squad by **Trainer** \r\nfor example https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py \r\nI want the training inputs, labels, outputs for every epoch and step to produce the training dynamic graph", "I think you may need to implement your own Trainer, from the `QuestionAnsweringTrainer` for example.\r\nThis way you can have the flexibility of saving all the inputs/output used at each step", "does there have any function to be overwritten to do this?", "> does there have any function to be overwritten to do this?\r\n\r\nok, I overwrote the compute_loss, thank you.", "Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? 
below is the information of inputs\r\n\r\n```\r\n*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2106, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 2339, 2001, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} \r\n```\r\n\r\n```\r\n# This function preprocesses a question answering dataset, tokenizing the question and context text\r\n# and finding the right offsets for the answer spans in the tokenized context (to use as labels).\r\n# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py\r\ndef prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):\r\n questions = [q.lstrip() for q in examples[\"question\"]]\r\n max_seq_length = tokenizer.model_max_length\r\n # tokenize both questions and the corresponding context\r\n # if the context length is longer than max_length, we split it to several\r\n # chunks of max_length\r\n tokenized_examples = tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n truncation=\"only_second\",\r\n max_length=max_seq_length,\r\n stride=min(max_seq_length // 2, 128),\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\"\r\n )\r\n\r\n # Since one example might give us several features if it has a long context,\r\n # we need a map from a feature to its corresponding example.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position\r\n # in the original context. 
This will help us compute the start_positions\r\n # and end_positions to get the final answer string.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n # We will label features not containing the answer the index of the CLS token.\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n # from the feature idx to sample idx\r\n sample_index = sample_mapping[i]\r\n # get the answer for a feature\r\n answers = examples[\"answers\"][sample_index]\r\n\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != 1:\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != 1:\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and\r\n offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and \\\r\n offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(\r\n token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n```" ]
1,638,109,798,000
1,638,351,271,000
1,638,331,068,000
NONE
null
Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/` after changing the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html `dataset = datasets.load_dataset('json', data_files=args.dataset)` Errors: `Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264... ` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/730#issuecomment-981095050_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3333/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3332/comments
https://api.github.com/repos/huggingface/datasets/issues/3332/events
https://github.com/huggingface/datasets/pull/3332
1,065,345,853
PR_kwDODunzps4vGBig
3,332
Fix error message and add extension fallback
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,109,529,000
1,638,192,855,000
1,638,192,854,000
CONTRIBUTOR
null
Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust. In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix ordering. Now, we go from the most common to the least common extension and try to map it or return `None`. Fix #3331
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3332/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3332", "html_url": "https://github.com/huggingface/datasets/pull/3332", "diff_url": "https://github.com/huggingface/datasets/pull/3332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3332.patch", "merged_at": 1638192854000 }
true
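An illustrative reimplementation of the fallback described in PR 3332 above; the real function lives in `datasets`, and the mapping and names here are simplified assumptions:

```python
from collections import Counter

EXTENSION_TO_MODULE = {"csv": "csv", "tsv": "csv", "json": "json", "jsonl": "json", "txt": "text"}

def infer_module_for_data_files(data_files):
    # Walk extensions from most to least common and return the first one
    # that maps to a packaged module, instead of inspecting only the winner.
    counts = Counter(f.rsplit(".", 1)[-1].lower() for f in data_files if "." in f)
    for ext, _ in counts.most_common():
        if ext in EXTENSION_TO_MODULE:
            return EXTENSION_TO_MODULE[ext]
    return None

print(infer_module_for_data_files(["a.json", "b.json", "README.md"]))  # -> "json"
```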
https://api.github.com/repos/huggingface/datasets/issues/3331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3331/comments
https://api.github.com/repos/huggingface/datasets/issues/3331/events
https://github.com/huggingface/datasets/issues/3331
1,065,275,896
I_kwDODunzps4_ftH4
3,331
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
{ "login": "luozhouyang", "id": 34032031, "node_id": "MDQ6VXNlcjM0MDMyMDMx", "avatar_url": "https://avatars.githubusercontent.com/u/34032031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luozhouyang", "html_url": "https://github.com/luozhouyang", "followers_url": "https://api.github.com/users/luozhouyang/followers", "following_url": "https://api.github.com/users/luozhouyang/following{/other_user}", "gists_url": "https://api.github.com/users/luozhouyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/luozhouyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luozhouyang/subscriptions", "organizations_url": "https://api.github.com/users/luozhouyang/orgs", "repos_url": "https://api.github.com/users/luozhouyang/repos", "events_url": "https://api.github.com/users/luozhouyang/events{/privacy}", "received_events_url": "https://api.github.com/users/luozhouyang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi,\r\n\r\nthe fix was merged and will be available in the next release of `datasets`.\r\nIn the meantime, you can use it by installing `datasets` directly from master as follows:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```" ]
1,638,089,645,000
1,638,193,784,000
1,638,192,854,000
NONE
null
## Describe the bug I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets) But when I load the dataset, an error raised: ```bash AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("luozhouyang/question-answering-datasets", data_files=["dureader_robust.train.json"]) ``` ## Expected results Load dataset successfully without any error. ## Actual results ```bash Traceback (most recent call last): File "/mnt/home/zhouyang.lzy/github/naivenlp/naivenlp/tests/question_answering_tests/dataset_test.py", line 89, in test_load_dataset_with_hf data_files=["dureader_robust.train.json"], File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1616, in load_dataset **config_kwargs, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1443, in load_dataset_builder path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1157, in dataset_module_factory raise e1 from None File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1144, in dataset_module_factory download_mode=download_mode, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 798, in get_module raise FileNotFoundError(f"No data files or dataset script found in {self.path}") AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: linux - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3331/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3330/comments
https://api.github.com/repos/huggingface/datasets/issues/3330/events
https://github.com/huggingface/datasets/pull/3330
1,065,176,619
PR_kwDODunzps4vFtF7
3,330
Change TriviaQA license (#3313)
{ "login": "avinashsai", "id": 22453634, "node_id": "MDQ6VXNlcjIyNDUzNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinashsai", "html_url": "https://github.com/avinashsai", "followers_url": "https://api.github.com/users/avinashsai/followers", "following_url": "https://api.github.com/users/avinashsai/following{/other_user}", "gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions", "organizations_url": "https://api.github.com/users/avinashsai/orgs", "repos_url": "https://api.github.com/users/avinashsai/repos", "events_url": "https://api.github.com/users/avinashsai/events{/privacy}", "received_events_url": "https://api.github.com/users/avinashsai/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,070,005,000
1,638,185,061,000
1,638,185,061,000
CONTRIBUTOR
null
Fixes (#3313)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3330", "html_url": "https://github.com/huggingface/datasets/pull/3330", "diff_url": "https://github.com/huggingface/datasets/pull/3330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3330.patch", "merged_at": 1638185061000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3329/comments
https://api.github.com/repos/huggingface/datasets/issues/3329/events
https://github.com/huggingface/datasets/issues/3329
1,065,096,971
I_kwDODunzps4_fBcL
3,329
Map function: Type error on iter #999
{ "login": "josephkready666", "id": 52659318, "node_id": "MDQ6VXNlcjUyNjU5MzE4", "avatar_url": "https://avatars.githubusercontent.com/u/52659318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josephkready666", "html_url": "https://github.com/josephkready666", "followers_url": "https://api.github.com/users/josephkready666/followers", "following_url": "https://api.github.com/users/josephkready666/following{/other_user}", "gists_url": "https://api.github.com/users/josephkready666/gists{/gist_id}", "starred_url": "https://api.github.com/users/josephkready666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josephkready666/subscriptions", "organizations_url": "https://api.github.com/users/josephkready666/orgs", "repos_url": "https://api.github.com/users/josephkready666/repos", "events_url": "https://api.github.com/users/josephkready666/events{/privacy}", "received_events_url": "https://api.github.com/users/josephkready666/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.", "```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n :return: int\r\n \"\"\"\r\n try:\r\n numbers = find_numbers(text)\r\n if not numbers:\r\n return text\r\n result = \"\"\r\n i, j = 0, 0\r\n while i < len(text):\r\n if j < len(numbers) and i == numbers[j][1]:\r\n n = int(numbers[j][0]) if numbers[j][0] % 1 == 0 else float(numbers[j][0])\r\n result += str(n)\r\n i = numbers[j][2] #end\r\n j += 1\r\n else:\r\n result += text[i]\r\n i += 1\r\n if column:\r\n return{column: result}\r\n else:\r\n return {column: result}\r\n except Exception as e:\r\n print(e)\r\n return {column: result}\r\n```", "Maybe this is because of the `return text` line ? I think it should return a dictionary rather than a string", "Yes that was it, good catch! Thanks" ]
1,638,035,585,000
1,638,218,415,000
1,638,218,415,000
NONE
null
## Describe the bug Using the map function throws a type error on iteration #999. Here is the code I am calling: ``` dataset = datasets.load_dataset('squad') dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'}) ``` text_numbers_to_int returns the input text with numbers replaced in the format {'context': text} It happens at ` File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp> [row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col ` The issue is that the list comprehension expects self.current_examples to be of type tuple(dict, str), but for some reason 26 out of 1000 of the self.current_examples are of type tuple(str, str) Here is an example of what self.current_examples should be ({'context': 'Super Bowl 50 was an...merals 50.'}, '') Here is an example of what self.current_examples are when it throws the error: ('The Panthers used th... Marriott.', '')
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3329/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
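The takeaway from the thread above is that a function passed to `map` must consistently return a dict; a minimal, self-contained illustration:

```python
from datasets import Dataset

ds = Dataset.from_dict({"context": ["one 1", "plain text"]})

def normalize(example):
    text = example["context"]
    # Always return a dict, even when nothing changes; returning the
    # bare string on some code path is what broke arrow_writer here.
    return {"context": text.upper()}

ds = ds.map(normalize)
print(ds["context"])
```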
https://api.github.com/repos/huggingface/datasets/issues/3328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3328/comments
https://api.github.com/repos/huggingface/datasets/issues/3328/events
https://github.com/huggingface/datasets/pull/3328
1,065,015,262
PR_kwDODunzps4vFTpW
3,328
Quick fix error formatting
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,638,013,668,000
1,638,192,762,000
1,638,192,762,000
CONTRIBUTOR
null
While working on a dataset, I got the error ``` TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`. ``` This PR should fix the formatting of this error
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3328", "html_url": "https://github.com/huggingface/datasets/pull/3328", "diff_url": "https://github.com/huggingface/datasets/pull/3328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3328.patch", "merged_at": 1638192762000 }
true
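The garbled message in PR 3328 above is a plain string where an f-string was intended; a runnable illustration of the one-character fix (the variable values are stand-ins for illustration):

```python
processed_inputs = {"a": [1, 2], "b": ["x", "y"]}
allowed_batch_return_types = (list,)

# The `f` prefix makes Python interpolate the brace expressions instead
# of printing them literally, which is all the fix requires.
message = (
    f"Provided `function` which is applied to all elements of table returns a "
    f"`dict` of types {[type(x) for x in processed_inputs.values()]}. When using "
    f"`batched=True`, make sure provided `function` returns a `dict` of types "
    f"like `{allowed_batch_return_types}`."
)
print(message)
```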
https://api.github.com/repos/huggingface/datasets/issues/3327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3327/comments
https://api.github.com/repos/huggingface/datasets/issues/3327/events
https://github.com/huggingface/datasets/issues/3327
1,064,675,888
I_kwDODunzps4_daow
3,327
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
{ "login": "eliasws", "id": 19492473, "node_id": "MDQ6VXNlcjE5NDkyNDcz", "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliasws", "html_url": "https://github.com/eliasws", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "organizations_url": "https://api.github.com/users/eliasws/orgs", "repos_url": "https://api.github.com/users/eliasws/repos", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "received_events_url": "https://api.github.com/users/eliasws/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "#3323 " ]
1,637,943,996,000
1,637,945,051,000
1,637,945,051,000
CONTRIBUTOR
null
## Describe the bug Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" Probably the reason for this is a wrongly converted assertion. 1.15.1: `assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)` 1.16.1: ``` if len(query.shape) != 1 or (len(query.shape) == 2 and query.shape[0] != 1): raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)") ``` ## Steps to reproduce the bug follow the steps described here: https://huggingface.co/course/chapter5/6?fw=tf ```python question_embedding.shape # (1, 768) scores, samples = embeddings_dataset.get_nearest_examples( "embeddings", question_embedding, k=5 # Error ) # "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" ``` ## Expected results Should work without exception ## Actual results Throws exception ## Environment info - `datasets` version: 1.15.1 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.12 - PyArrow version: 6.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3327/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
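The correct negation of the 1.15.1 assertion quoted above, per De Morgan's laws (`not (A or B)` is `not A and not B`), shown as a standalone check:

```python
import numpy as np

def check_query_shape(query):
    # not (len == 1 or (len == 2 and shape[0] == 1))
    # == len != 1 and (len != 2 or shape[0] != 1)
    if len(query.shape) != 1 and (len(query.shape) != 2 or query.shape[0] != 1):
        raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)")

check_query_shape(np.zeros(768))       # ok: 1D
check_query_shape(np.zeros((1, 768)))  # ok: 2D (1, N)
```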
https://api.github.com/repos/huggingface/datasets/issues/3326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3326/comments
https://api.github.com/repos/huggingface/datasets/issues/3326/events
https://github.com/huggingface/datasets/pull/3326
1,064,664,479
PR_kwDODunzps4vEaYG
3,326
Fix import `datasets` on python 3.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,943,000,000
1,637,944,283,000
1,637,944,283,000
MEMBER
null
In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`. To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators Fix #3324
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3326/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3326", "html_url": "https://github.com/huggingface/datasets/pull/3326", "diff_url": "https://github.com/huggingface/datasets/pull/3326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3326.patch", "merged_at": 1637944283000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3325/comments
https://api.github.com/repos/huggingface/datasets/issues/3325/events
https://github.com/huggingface/datasets/pull/3325
1,064,663,075
PR_kwDODunzps4vEaGO
3,325
Update conda dependencies
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,942,887,000
1,637,943,637,000
1,637,943,636,000
MEMBER
null
Some dependencies' minimum versions were outdated, for example `pyarrow` and `huggingface_hub`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3325", "html_url": "https://github.com/huggingface/datasets/pull/3325", "diff_url": "https://github.com/huggingface/datasets/pull/3325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3325.patch", "merged_at": 1637943636000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3324/comments
https://api.github.com/repos/huggingface/datasets/issues/3324/events
https://github.com/huggingface/datasets/issues/3324
1,064,661,212
I_kwDODunzps4_dXDc
3,324
Can't import `datasets` in python 3.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,942,774,000
1,637,944,283,000
1,637,944,283,000
MEMBER
null
When importing `datasets` I'm getting this error in Python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module> from .arrow_reader import ArrowReader File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module> from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module> class InMemoryTable(TableBlock): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable def from_pandas(cls, *args, **kwargs): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper out = wraps(arrow_table_method)(method) File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper wrapper.__wrapped__ = wrapped AttributeError: readonly attribute ``` This makes the conda build fail. I'm opening a PR to fix this and do a patch release 1.16.1.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3324/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3323/comments
https://api.github.com/repos/huggingface/datasets/issues/3323/events
https://github.com/huggingface/datasets/pull/3323
1,064,660,452
PR_kwDODunzps4vEZwq
3,323
Fix wrongly converted assert
{ "login": "eliasws", "id": 19492473, "node_id": "MDQ6VXNlcjE5NDkyNDcz", "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliasws", "html_url": "https://github.com/eliasws", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "organizations_url": "https://api.github.com/users/eliasws/orgs", "repos_url": "https://api.github.com/users/eliasws/repos", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "received_events_url": "https://api.github.com/users/eliasws/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Closes #3327 " ]
1,637,942,739,000
1,637,945,052,000
1,637,945,051,000
CONTRIBUTOR
null
It seems this assertion was replaced by an exception, but the condition was wrongly converted.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3323/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3323", "html_url": "https://github.com/huggingface/datasets/pull/3323", "diff_url": "https://github.com/huggingface/datasets/pull/3323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3323.patch", "merged_at": 1637945051000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3322/comments
https://api.github.com/repos/huggingface/datasets/issues/3322/events
https://github.com/huggingface/datasets/pull/3322
1,064,429,705
PR_kwDODunzps4vD1Ct
3,322
Add missing tags to XTREME
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,930,225,000
1,638,193,207,000
1,638,193,206,000
CONTRIBUTOR
null
Add missing tags to the XTREME benchmark for better discoverability.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3322", "html_url": "https://github.com/huggingface/datasets/pull/3322", "diff_url": "https://github.com/huggingface/datasets/pull/3322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3322.patch", "merged_at": 1638193206000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3321/comments
https://api.github.com/repos/huggingface/datasets/issues/3321/events
https://github.com/huggingface/datasets/pull/3321
1,063,858,386
PR_kwDODunzps4vCBeI
3,321
Update URL of tatoeba subset of xtreme
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "<s>To be more precise: `os.path.join` is replaced on-the-fly by `xjoin` anyway with patching, to extend it to remote files</s>", "Oh actually just ignore what I said: they were used to concatenate URLs, which is not recommended. Let me fix that again by appending using `+`" ]
1,637,865,751,000
1,637,922,630,000
1,637,922,630,000
CONTRIBUTOR
null
Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows. Fix #3320
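As a hedged illustration of why `os.path.join` is the wrong tool for URLs (and why the comments above ultimately settle on plain `+` concatenation), with an illustrative URL:

```python
import os
import posixpath

base = "https://github.com/facebookresearch/LASER/raw/main"

# os.path.join uses the platform separator, so on Windows it produces
# "...\\data\\tatoeba\\v1", which is not a valid URL path.
platform_joined = os.path.join(base, "data", "tatoeba", "v1")

# URLs always use forward slashes; plain concatenation (or a POSIX-style
# join) behaves identically on every platform.
url = base + "/data/tatoeba/v1"
assert url == posixpath.join(base, "data", "tatoeba", "v1")
```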
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3321", "html_url": "https://github.com/huggingface/datasets/pull/3321", "diff_url": "https://github.com/huggingface/datasets/pull/3321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3321.patch", "merged_at": 1637922629000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3320/comments
https://api.github.com/repos/huggingface/datasets/issues/3320/events
https://github.com/huggingface/datasets/issues/3320
1,063,531,992
I_kwDODunzps4_ZDXY
3,320
Can't get tatoeba.rus dataset
{ "login": "mmg10", "id": 65535131, "node_id": "MDQ6VXNlcjY1NTM1MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/65535131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmg10", "html_url": "https://github.com/mmg10", "followers_url": "https://api.github.com/users/mmg10/followers", "following_url": "https://api.github.com/users/mmg10/following{/other_user}", "gists_url": "https://api.github.com/users/mmg10/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmg10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmg10/subscriptions", "organizations_url": "https://api.github.com/users/mmg10/orgs", "repos_url": "https://api.github.com/users/mmg10/repos", "events_url": "https://api.github.com/users/mmg10/events{/privacy}", "received_events_url": "https://api.github.com/users/mmg10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,843,471,000
1,637,922,629,000
1,637,922,629,000
NONE
null
## Describe the bug It gives an error. > FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus ## Steps to reproduce the bug ```python data=load_dataset("xtreme","tatoeba.rus", split="validation") ``` ## Solution The library tries to access the **master** branch. In the facebookresearch GitHub repo, the data is on the **main** branch.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3320/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3319/comments
https://api.github.com/repos/huggingface/datasets/issues/3319/events
https://github.com/huggingface/datasets/pull/3319
1,062,749,654
PR_kwDODunzps4u-xdv
3,319
Add push_to_hub docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Looks good to me! :)\r\n\r\nMaybe we can mention that users can also set the `private` argument if they want to keep their dataset private? It would lead nicely into the next section on Privacy.", "Thanks for your comments, I fixed the capitalization for consistency and added an passage to mention the `private` parameter and to have a nice transition to the Privacy section :)\r\n\r\nI also added the login instruction that was missing before the user can actually upload a dataset." ]
1,637,778,071,000
1,637,851,666,000
1,637,851,666,000
MEMBER
null
Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method. I just added a section in the "Upload a dataset to the Hub" tutorial. I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :)
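The flow covered by the new tutorial section looks roughly like this sketch (the repository name is illustrative; `private=True` is the optional argument the review comments suggest mentioning):

```python
from datasets import load_dataset

# Requires being logged in first, e.g. via `huggingface-cli login`.
ds = load_dataset("imdb", split="train")

# Uploads the dataset to the Hub under your namespace; pass private=True
# to keep the repository private, which ties into the Privacy section.
ds.push_to_hub("my-username/imdb-copy", private=True)
```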
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3319/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3319", "html_url": "https://github.com/huggingface/datasets/pull/3319", "diff_url": "https://github.com/huggingface/datasets/pull/3319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3319.patch", "merged_at": 1637851666000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3318/comments
https://api.github.com/repos/huggingface/datasets/issues/3318/events
https://github.com/huggingface/datasets/pull/3318
1,062,369,717
PR_kwDODunzps4u9m-k
3,318
Finish transition to PyArrow 3.0.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,757,014,000
1,637,768,105,000
1,637,768,104,000
CONTRIBUTOR
null
Finish transition to PyArrow 3.0.0 that was started in #3098.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3318", "html_url": "https://github.com/huggingface/datasets/pull/3318", "diff_url": "https://github.com/huggingface/datasets/pull/3318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3318.patch", "merged_at": 1637768104000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3317/comments
https://api.github.com/repos/huggingface/datasets/issues/3317/events
https://github.com/huggingface/datasets/issues/3317
1,062,284,447
I_kwDODunzps4_USyf
3,317
Add desc parameter to Dataset filter method
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi,\r\n\r\n`Dataset.map` allows more generic transforms compared to `Dataset.filter`, which purpose is very specific (to filter examples based on a condition). That's why I don't think we need the `desc` parameter there for consistency. #3196 has added descriptions to the `Dataset` methods that call `.map` internally, but not for the `filter` method, so we should do that.\r\n\r\nDo you have a description in mind? Maybe `\"Filtering the dataset\"` or `\"Filtering the indices\"`? If yes, feel free to open a PR.", "I'm personally ok with adding the `desc` parameter actually. Let's say you have different filters, it can be nice to differentiate between the different filters when they're running no ?", "@mariosasko the use case is filtering of a dataset prior to tokenization and subsequent training. As the dataset is huge it's just a matter of giving a user (model trainer) some feedback on what's going on. Otherwise, feedback is given for all steps in training preparation and not for filtering and the filtering in my use case lasts about 4-5 minutes. And yes, if there are more filtering stages, as @lhoestq pointed out, it would be nice to give some feedback. I thought desc is there already and got confused when I got the script error. ", "I don't have a strong opinion on that, so having `desc` as a parameter is also OK." ]
1,637,751,696,000
1,641,407,484,000
1,641,407,484,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** While filtering very large datasets, I noticed that the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method, both for consistency and to give users some feedback during long operations on Datasets? **Describe the solution you'd like** Add a desc parameter to the Dataset filter method. **Describe alternatives you've considered** N/A **Additional context** N/A
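Once the parameter is available, usage would look like this sketch (dataset and predicates are illustrative; `desc` mirrors the argument `Dataset.map` already accepts):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# A labelled progress bar per stage makes it easy to tell which of
# several long-running filters is currently executing.
short = ds.filter(lambda ex: len(ex["text"]) < 1000, desc="Dropping long reviews")
positive = short.filter(lambda ex: ex["label"] == 1, desc="Keeping positive reviews")
```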
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3317/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3317/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3316/comments
https://api.github.com/repos/huggingface/datasets/issues/3316/events
https://github.com/huggingface/datasets/issues/3316
1,062,185,822
I_kwDODunzps4_T6te
3,316
Add RedCaps dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,745,782,000
1,641,996,795,000
1,641,996,795,000
MEMBER
null
## Adding a Dataset - **Name:** RedCaps - **Description:** Web-curated image-text data created by the people, for the people - **Paper:** https://arxiv.org/abs/2111.11431 - **Data:** https://redcaps.xyz/ - **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Proposed by @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3316/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3315/comments
https://api.github.com/repos/huggingface/datasets/issues/3315/events
https://github.com/huggingface/datasets/pull/3315
1,061,678,452
PR_kwDODunzps4u7WpU
3,315
Removing query params for dynamic URL caching
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "IMO it makes more sense to have `ignore_url_params` as an attribute of `DownloadConfig` to avoid defining a new argument in `DownloadManger`'s methods.", "@mariosasko that would make sense to me too, but it seems like `DownloadConfig` wasn't intended to be modified from a dataset loading script. @lhoestq wdyt?", "We can expose `DownloadConfig` as a property of `DownloadManager`, and then in the script before the download call we could do: `dl_manager.download_config.ignore_url_params = True`. But yes, let's hear what Quentin thinks.", "Oh indeed that's a great idea. This parameter is similar to others like `download_config.use_etag` that defines the behavior of the download and caching, so it's better if we have it there, and expose the `download_config`", "Implemented it via `dl_manager.download_config.ignore_url_params` now, and also added a usage example above :) " ]
1,637,699,052,000
1,637,851,472,000
1,637,851,471,000
CONTRIBUTOR
null
The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic. Usage example: ```python import datasets class CommonVoice(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo() def _split_generators(self, dl_manager): dl_manager.download_config.ignore_url_params = True HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) HUGE_URL += "&some_new_or_changed_param=12345" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) dl_manager = datasets.DownloadManager(dataset_name="common_voice") CommonVoice()._split_generators(dl_manager) ``` Output: ``` /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3315", "html_url": "https://github.com/huggingface/datasets/pull/3315", "diff_url": "https://github.com/huggingface/datasets/pull/3315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3315.patch", "merged_at": 1637851471000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3314/comments
https://api.github.com/repos/huggingface/datasets/issues/3314/events
https://github.com/huggingface/datasets/pull/3314
1,061,448,227
PR_kwDODunzps4u6mdX
3,314
Adding arg to pass process rank to `map`
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?" ]
1,637,682,921,000
1,637,754,853,000
1,637,754,853,000
MEMBER
null
This PR adds a `with_rank` argument to `map` that lets the user receive the rank of each process in their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code; I'll add a multi-GPU example to the doc (and write a bit in the doc about the new argument).
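A minimal sketch of the new argument (the function below is illustrative; a real multi-GPU job would use the rank to pick a device, e.g. `f"cuda:{rank}"`):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

def tag_rank(example, rank):
    # With with_rank=True, map() passes each process's rank as an extra
    # argument; the guard covers a single-process run where it may be None.
    example["rank"] = rank if rank is not None else 0
    return example

ds = ds.map(tag_rank, with_rank=True, num_proc=2)
```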
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3314", "html_url": "https://github.com/huggingface/datasets/pull/3314", "diff_url": "https://github.com/huggingface/datasets/pull/3314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3314.patch", "merged_at": 1637754853000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3313/comments
https://api.github.com/repos/huggingface/datasets/issues/3313/events
https://github.com/huggingface/datasets/issues/3313
1,060,933,392
I_kwDODunzps4_PI8Q
3,313
TriviaQA License Mismatch
{ "login": "akhilkedia", "id": 16665267, "node_id": "MDQ6VXNlcjE2NjY1MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/16665267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akhilkedia", "html_url": "https://github.com/akhilkedia", "followers_url": "https://api.github.com/users/akhilkedia/followers", "following_url": "https://api.github.com/users/akhilkedia/following{/other_user}", "gists_url": "https://api.github.com/users/akhilkedia/gists{/gist_id}", "starred_url": "https://api.github.com/users/akhilkedia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akhilkedia/subscriptions", "organizations_url": "https://api.github.com/users/akhilkedia/orgs", "repos_url": "https://api.github.com/users/akhilkedia/repos", "events_url": "https://api.github.com/users/akhilkedia/events{/privacy}", "received_events_url": "https://api.github.com/users/akhilkedia/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! You're completely right, this must be mentioned in the dataset card.\r\nIf you're interesting in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the \"Licensing Information\" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md" ]
1,637,654,415,000
1,638,185,061,000
1,638,185,061,000
NONE
null
## Describe the bug The TriviaQA webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, the Hugging Face dataset page at https://huggingface.co/datasets/trivia_qa says the dataset is released under the Apache License. Is the license information on Hugging Face correct?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3313/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3312/comments
https://api.github.com/repos/huggingface/datasets/issues/3312/events
https://github.com/huggingface/datasets/pull/3312
1,060,440,346
PR_kwDODunzps4u3duV
3,312
add bl books genre dataset
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "To fix the CI, feel free to run the `make style` command to format the code.\r\n\r\nThen it also looks like the dummy_data.zip archives are all empty, which makes the tests fail. Can you try regenerating them ? They should have one file inside which is a dummy version of the file at https://bl.iro.bl.uk/downloads/36c7cd20-c8a7-4495-acbe-469b9132c6b1?locale=en", "@lhoestq, thanks for that feedback. \r\n\r\nI should have made most of these changes now. The `--auto_generate` flag wasn't working because the file wasn't downloaded with a `.csv` extension. I used `--match_text_files \"*\"` to get around this. Because there is a lot of data that isn't annotated using the default line number for the dummy data causes the `annotated_raw` and the `title_genre_classifiction` configs to fail because they don't generate any examples — bumping the line numbers to `250` fixes this. This does make the dummy data a bit bigger, though. \r\n\r\nThe total directory size for the dataset is now `150kb`. Is this okay, or do you want me to generate the dummy data manually instead? ", "Hi ! yes 150kB is fine :)\r\nFeel free to push your new dummy_data.zip files (I think the current one are still the empty ones)", "@lhoestq I've pushed those dummy files now and added your other suggestions.", "The CI failure is unrelated to this PR, merging :)", "@lhoestq, thanks for all your help with this pull request 😀" ]
1,637,603,690,000
1,638,461,429,000
1,638,461,267,000
CONTRIBUTOR
null
First of all, thanks for the fantastic library/collection of datasets 🤗 This pull request adds a dataset of metadata from digitised (mostly 19th-century) books from the British Library. The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data. I have tried to create three configurations that provide both an 'easy' version of the dataset, if you want to use it for training a genre classification model, and a more 'raw' version of the data for other potential use cases. I am open to suggestions if this doesn't make sense. Similarly, for some of the Arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows, but I may have missed a more elegant way of dealing with it.
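A hedged sketch of the string fallback mentioned above (column names are hypothetical): declaring a column as a string `Value` lets Arrow store missing entries as nulls, i.e. `None` in Python.

```python
from datasets import Features, Value

# Hypothetical columns: genre annotations exist only for a subset of
# rows, so the column is a nullable string rather than a richer type.
features = Features(
    {
        "title": Value("string"),
        "annotated_genre": Value("string"),  # None for unannotated rows
    }
)
```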
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3312/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3312", "html_url": "https://github.com/huggingface/datasets/pull/3312", "diff_url": "https://github.com/huggingface/datasets/pull/3312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3312.patch", "merged_at": 1638461267000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3311/comments
https://api.github.com/repos/huggingface/datasets/issues/3311/events
https://github.com/huggingface/datasets/issues/3311
1,060,387,957
I_kwDODunzps4_NDx1
3,311
Add WebSRC
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,600,313,000
1,637,600,313,000
null
NONE
null
## Adding a Dataset - **Name:** WebSRC - **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata. - **Paper:** https://arxiv.org/abs/2101.09465 - **Data:** https://x-lance.github.io/WebSRC/dashboard.html# - **Motivation:** Currently adding MarkupLM to HuggingFace Transformers, which achieves SOTA on this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3311/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3310/comments
https://api.github.com/repos/huggingface/datasets/issues/3310/events
https://github.com/huggingface/datasets/issues/3310
1,060,098,104
I_kwDODunzps4_L9A4
3,310
Fatal error condition occurred in aws-c-io
{ "login": "Crabzmatic", "id": 31850219, "node_id": "MDQ6VXNlcjMxODUwMjE5", "avatar_url": "https://avatars.githubusercontent.com/u/31850219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Crabzmatic", "html_url": "https://github.com/Crabzmatic", "followers_url": "https://api.github.com/users/Crabzmatic/followers", "following_url": "https://api.github.com/users/Crabzmatic/following{/other_user}", "gists_url": "https://api.github.com/users/Crabzmatic/gists{/gist_id}", "starred_url": "https://api.github.com/users/Crabzmatic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Crabzmatic/subscriptions", "organizations_url": "https://api.github.com/users/Crabzmatic/orgs", "repos_url": "https://api.github.com/users/Crabzmatic/repos", "events_url": "https://api.github.com/users/Crabzmatic/events{/privacy}", "received_events_url": "https://api.github.com/users/Crabzmatic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Are you having this issue only with this specific dataset, or it also happens with other ones like `squad` ?", "@lhoestq It happens also on `squad`. It successfully downloads the whole dataset and then crashes on: \r\n\r\n```\r\nFatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n```\r\n\r\nI tested it on Ubuntu and its working OK. Didn't test on non-preview version of Windows 11, `Windows-10-10.0.22504-SP0` is a preview version, not sure if this is causing it.", "I see the same error in Windows-10.0.19042 as of a few days ago:\r\n\r\n`Fatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS`\r\n\r\npython 3.8.12 h7840368_2_cpython conda-forge\r\nboto3 1.20.11 pyhd8ed1ab_0 conda-forge\r\nbotocore 1.23.11 pyhd8ed1ab_0 conda-forge\r\n\r\n...but I am not using `datasets` (although I might take a look now that I know about it!)\r\n\r\nThe error has occurred a few times over the last two days, but not consistently enough for me to get it with DEBUG. If there is any interest I can report back here, but it seems not unique to `datasets`.", "I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?", "> I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?\r\n\r\nAgreed, this issue is not likely a bug in datasets, since I get the identical error without datasets installed.", "Will close this issue. Bug in `aws-c-io` shouldn't be in `datasets` repo. Nevertheless, it can be useful to know that it happens. Thanks @leehaust @lhoestq ", "I have also had this issue since a few days, when running scripts using PyCharm in particular, but it does not seem to affect the script from running, only reporting this error at the end of the run.", "I also get this issue, It appears after my script has finished running. 
I get the following error message\r\n```\r\nFatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_backtrace_print+0x59) [0x2aabe0479579]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_fatal_assert+0x48) [0x2aabe04696c8]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x13ad3) [0x2aabe0624ad3]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x113ca) [0x2aabe06223ca]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-crt-cpp.so(_ZN3Aws3Crt2Io15ClientBootstrapD1Ev+0x3a) [0x2aabe041cf5a]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././libaws-cpp-sdk-core.so(+0x5f570) [0x2aabe00eb570]\r\n/lib64/libc.so.6(+0x39ce9) [0x2aaaab835ce9]\r\n/lib64/libc.so.6(+0x39d37) [0x2aaaab835d37]\r\n/lib64/libc.so.6(__libc_start_main+0xfc) [0x2aaaab81e55c]\r\npython(+0x1c721d) [0x55555571b21d]\r\nAborted\r\n```\r\nI don't get this issue when running my code in a container, and it seems more relevant to PyArrow but thought a more complete stack trace might be helpful to someone\r\n", "I created an issue on JIRA:\r\nhttps://issues.apache.org/jira/browse/ARROW-15141", "@CallumMcMahon Do you have a small reproducer for this problem on Linux? I can reproduce this on Windows but sadly not with linux." ]
1,637,584,074,000
1,639,733,245,000
1,638,224,557,000
NONE
null
## Describe the bug Fatal error when using the library ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wikiann', 'en') ``` ## Expected results No fatal errors ## Actual results ``` Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ``` ## Environment info - `datasets` version: 1.15.2.dev0 - Platform: Windows-10-10.0.22504-SP0 - Python version: 3.8.12 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3310/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3310/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3309/comments
https://api.github.com/repos/huggingface/datasets/issues/3309/events
https://github.com/huggingface/datasets/pull/3309
1,059,496,154
PR_kwDODunzps4u0Xgm
3,309
fix: files counted twice in inferred structure
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "I see it creates some errors in the tests.\r\n\r\nAnother solution if needed is to add something like `data_files = list(set(data_files))` after [this line](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/data_files.py#L511)", "Hi ! Thanks for the correction :)\r\n\r\nYour change seems right, let me look at the errors and try to fix this", "Not sure if it's due to this change but I tested `load_dataset('dalle-mini/encoded-vqgan_imagenet_f16_16384', streaming=True)` and the `validation` set is empty.", "So indeed there was an issue with the patterns `*` and `**/*` that would return some files twice. This issue came from the fact that we were not using the right `glob`.\r\n\r\nIndeed we were using `Path.rglob` for local files and `Path.match` for remote files. Since these two methods don't have the same behavior for such patterns, I decided to change that.\r\n\r\nIn particular, we now use `glob.glob` (same as `fsspec` glob) as a reference for data files resolution from patterns. This is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to `glob.glob` that are different from Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level files\r\n- '**/*' matches only at least second level files\r\n\r\nThis way we have a consistent behavior with respect to other python data libraries and there's no overlap anymore between the two patterns.\r\n\r\nSome implementations details:\r\n\r\nTo ensure that we have the same behavior for local files and for files in a remote dataset repository, I decided to use `fsspec` glob for both. This was made possible by implementing the `HfFileSystem` class as a `fsspec` filesystem.\r\n\r\nI pushed those changes directly to your PR - I hope you don't mind. I'm still fixing the remaining tests.\r\nPlease let me know if that solves your problem, and then we can merge !", "There's still an issue with fsspec's glob - I'll take a look this afternoon", "I just found out that actually glob.glob and fsspec glob are different haha\r\nglob.glob needs `**/*` and recursive=True to look into deep subdirectories, while fsspec only requires `**`\r\n\r\nI think we can go with fsspec glob for consistency with dask and since it's our main tool for filesystems management", "To recap:\r\n```\r\nWe use fsspec glob as a reference for data files resolution from patterns.\r\nThis is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to fsspec glob that are different from glob.glob, Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level items\r\n- '**' matches all items\r\n- '**/*' matches all at least second level items\r\n\r\nMore generally:\r\n- `*`` matches any character except a forward-slash (to match just the file or directory name)\r\n- `**`` matches any character including a forward-slash /\r\n```", "lol Windows… Maybe `Pathlib` for the tests?\r\n\r\nI tested streaming a repo and it worked perfectly now!" ]
1,637,531,438,000
1,637,686,858,000
1,637,686,858,000
CONTRIBUTOR
null
Files were counted twice in a structure like: ``` my_dataset_local_path/ ├── README.md └── data/ ├── train/ │ ├── shard_0.csv │ ├── shard_1.csv │ ├── shard_2.csv │ └── shard_3.csv └── valid/ ├── shard_0.csv └── shard_1.csv ``` The reason is that they were matching both `*train*/*` and `*train*/**/*`. This PR fixes it. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3309/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3309", "html_url": "https://github.com/huggingface/datasets/pull/3309", "diff_url": "https://github.com/huggingface/datasets/pull/3309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3309.patch", "merged_at": 1637686858000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3308/comments
https://api.github.com/repos/huggingface/datasets/issues/3308/events
https://github.com/huggingface/datasets/issues/3308
1,059,255,705
I_kwDODunzps4_IvWZ
3,308
"dataset_infos.json" missing for chr_en and mc4
{ "login": "amitness", "id": 8587189, "node_id": "MDQ6VXNlcjg1ODcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8587189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amitness", "html_url": "https://github.com/amitness", "followers_url": "https://api.github.com/users/amitness/followers", "following_url": "https://api.github.com/users/amitness/following{/other_user}", "gists_url": "https://api.github.com/users/amitness/gists{/gist_id}", "starred_url": "https://api.github.com/users/amitness/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitness/subscriptions", "organizations_url": "https://api.github.com/users/amitness/orgs", "repos_url": "https://api.github.com/users/amitness/repos", "events_url": "https://api.github.com/users/amitness/events{/privacy}", "received_events_url": "https://api.github.com/users/amitness/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for reporting :) \r\nWe can easily add the metadata for `chr_en` IMO, but for mC4 it will take more time, since it requires to count the number of examples in each language", "No problem. I am trying to do some analysis on the metadata of all available datasets. Is reading `metadata_infos.json` for each dataset the correct way to go? \r\n\r\nI noticed that the same information is also available as special variables inside .py file of each dataset. So, I was wondering if `metadata_infos.json` has been deprecated?\r\n\r\n![image](https://user-images.githubusercontent.com/8587189/142914413-a95a1abf-6f3e-4fbe-96e5-16d3ca39c831.png)\r\n", "The `dataset_infos.json` files have more information and are made to be used to analyze the datasets without having to run/parse the python scripts. Moreover some datasets on the Hugging face don't even have a python script, and for those ones we'll make tools to generate the JSON file automatically :)" ]
1,637,453,242,000
1,642,600,532,000
null
NONE
null
## Describe the bug In the repository, every dataset has its metadata in a file called `dataset_infos.json`. However, this file is missing for two datasets: `chr_en` and `mc4`. ## Steps to reproduce the bug Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/huggingface/datasets/tree/master/datasets/mc4)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3308/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3307/comments
https://api.github.com/repos/huggingface/datasets/issues/3307/events
https://github.com/huggingface/datasets/pull/3307
1,059,226,297
PR_kwDODunzps4uzlWa
3,307
Add IndoNLI dataset
{ "login": "afaji", "id": 6201626, "node_id": "MDQ6VXNlcjYyMDE2MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/6201626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/afaji", "html_url": "https://github.com/afaji", "followers_url": "https://api.github.com/users/afaji/followers", "following_url": "https://api.github.com/users/afaji/following{/other_user}", "gists_url": "https://api.github.com/users/afaji/gists{/gist_id}", "starred_url": "https://api.github.com/users/afaji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afaji/subscriptions", "organizations_url": "https://api.github.com/users/afaji/orgs", "repos_url": "https://api.github.com/users/afaji/repos", "events_url": "https://api.github.com/users/afaji/events{/privacy}", "received_events_url": "https://api.github.com/users/afaji/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "@lhoestq thanks for the review! I've modified the labels to follow other NLI datasets.\r\nPlease review my change and let me know if I miss anything." ]
1,637,441,163,000
1,637,851,908,000
1,637,851,908,000
CONTRIBUTOR
null
This PR adds the IndoNLI dataset from https://aclanthology.org/2021.emnlp-main.821/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3307", "html_url": "https://github.com/huggingface/datasets/pull/3307", "diff_url": "https://github.com/huggingface/datasets/pull/3307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3307.patch", "merged_at": 1637851908000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3306/comments
https://api.github.com/repos/huggingface/datasets/issues/3306/events
https://github.com/huggingface/datasets/issues/3306
1,059,185,860
I_kwDODunzps4_IeTE
3,306
nested sequence feature won't encode example if the first item of the outer sequence is an empty list
{ "login": "function2-llx", "id": 38486514, "node_id": "MDQ6VXNlcjM4NDg2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/38486514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/function2-llx", "html_url": "https://github.com/function2-llx", "followers_url": "https://api.github.com/users/function2-llx/followers", "following_url": "https://api.github.com/users/function2-llx/following{/other_user}", "gists_url": "https://api.github.com/users/function2-llx/gists{/gist_id}", "starred_url": "https://api.github.com/users/function2-llx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/function2-llx/subscriptions", "organizations_url": "https://api.github.com/users/function2-llx/orgs", "repos_url": "https://api.github.com/users/function2-llx/repos", "events_url": "https://api.github.com/users/function2-llx/events{/privacy}", "received_events_url": "https://api.github.com/users/function2-llx/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "knock knock", "Hi, thanks for reporting! I've linked a PR that should fix the issue.", "I've checked the PR and it looks great, thanks a lot!" ]
1,637,427,474,000
1,638,968,535,000
1,638,968,535,000
NONE
null
## Describe the bug As the title says, a nested sequence feature won't encode an example if the first item of the outer sequence is an empty list. ## Steps to reproduce the bug ```python from datasets import Features, Sequence, ClassLabel features = Features({ 'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))), }) print(features.encode_batch({ 'x': [ [['a'], ['b']], [[], ['b']], ] })) ``` ## Expected results print `{'x': [[[0], [1]], [[], [1]]]}` ## Actual results print `{'x': [[[0], [1]], [[], ['b']]]}` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.0 ## Additional information I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3306/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/3306/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3305/comments
https://api.github.com/repos/huggingface/datasets/issues/3305/events
https://github.com/huggingface/datasets/pull/3305
1,059,161,000
PR_kwDODunzps4uzZWv
3,305
asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``
{ "login": "Ishan-Kumar2", "id": 46553104, "node_id": "MDQ6VXNlcjQ2NTUzMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ishan-Kumar2", "html_url": "https://github.com/Ishan-Kumar2", "followers_url": "https://api.github.com/users/Ishan-Kumar2/followers", "following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}", "gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions", "organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs", "repos_url": "https://api.github.com/users/Ishan-Kumar2/repos", "events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}", "received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,419,883,000
1,637,605,472,000
1,637,600,893,000
CONTRIBUTOR
null
Addresses #3171. Replaces asserts with exceptions in ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``, and modifies the corresponding tests.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3305/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3305", "html_url": "https://github.com/huggingface/datasets/pull/3305", "diff_url": "https://github.com/huggingface/datasets/pull/3305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3305.patch", "merged_at": 1637600893000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3304/comments
https://api.github.com/repos/huggingface/datasets/issues/3304/events
https://github.com/huggingface/datasets/issues/3304
1,059,130,494
I_kwDODunzps4_IQx-
3,304
Dataset object has no attribute `to_tf_dataset`
{ "login": "RajkumarGalaxy", "id": 59993678, "node_id": "MDQ6VXNlcjU5OTkzNjc4", "avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RajkumarGalaxy", "html_url": "https://github.com/RajkumarGalaxy", "followers_url": "https://api.github.com/users/RajkumarGalaxy/followers", "following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}", "gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}", "starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions", "organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs", "repos_url": "https://api.github.com/users/RajkumarGalaxy/repos", "events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}", "received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "The issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n```\r\n# upgrade transformers and datasets to latest versions\r\n!pip install --upgrade transformers\r\n!pip install --upgrade datasets\r\n```\r\n\r\nRegards!" ]
1,637,409,839,000
1,637,478,445,000
1,637,478,445,000
NONE
null
I am following the HuggingFace Course. I am at Fine-tuning a model. Link: https://huggingface.co/course/chapter3/2?fw=tf I use a tokenize function and `map` as mentioned in the course to process the data. ```python # define a tokenize function def Tokenize_function(example): return tokenizer(example['sentence'], truncation=True) # tokenize entire data tokenized_data = raw_data.map(Tokenize_function, batched=True) ``` I get a Dataset object at this point. When I try converting this to a TF dataset object as mentioned in the course, it throws the following error. ```python # convert to TF dataset train_data = tokenized_data["train"].to_tf_dataset( columns = ['attention_mask', 'input_ids', 'token_type_ids'], label_cols = ['label'], shuffle = True, collate_fn = data_collator, batch_size = 8 ) ``` Output: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /tmp/ipykernel_42/103099799.py in <module> 1 # convert to TF dataset ----> 2 train_data = tokenized_data["train"].to_tf_dataset( \ 3 columns = ['attention_mask', 'input_ids', 'token_type_ids'], \ 4 label_cols = ['label'], \ 5 shuffle = True, \ AttributeError: 'Dataset' object has no attribute 'to_tf_dataset' ``` When I look at `dir(tokenized_data["train"])`, there is no method or attribute named `to_tf_dataset`. Why do I get this error, and how can I fix it? Please help me.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3304/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3303/comments
https://api.github.com/repos/huggingface/datasets/issues/3303/events
https://github.com/huggingface/datasets/issues/3303
1,059,129,732
I_kwDODunzps4_IQmE
3,303
DataCollatorWithPadding: TypeError
{ "login": "RajkumarGalaxy", "id": 59993678, "node_id": "MDQ6VXNlcjU5OTkzNjc4", "avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RajkumarGalaxy", "html_url": "https://github.com/RajkumarGalaxy", "followers_url": "https://api.github.com/users/RajkumarGalaxy/followers", "following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}", "gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}", "starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions", "organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs", "repos_url": "https://api.github.com/users/RajkumarGalaxy/repos", "events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}", "received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "\r\n> \r\n> Input:\r\n> \r\n> ```\r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"tf\")\r\n> ```\r\n> \r\n> Output:\r\n> \r\n> ```\r\n> TypeError Traceback (most recent call last)\r\n> /tmp/ipykernel_42/1563280798.py in <module>\r\n> 1 checkpoint = 'bert-base-uncased'\r\n> 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\r\n> TypeError: __init__() got an unexpected keyword argument 'return_tensors'\r\n> ```\r\n> \r\n\r\nThe issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n`# upgrade transformers and datasets to latest versions`\r\n`!pip install --upgrade transformers`\r\n`!pip install --upgrade datasets`\r\n\r\nCheers!" ]
1,637,409,595,000
1,637,478,337,000
1,637,478,337,000
NONE
null
Hi, I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as follows, I got an error while trying to reproduce the course code in Kaggle. This error occurs on either a CPU-only device or a GPU device. Input: ```checkpoint = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(checkpoint) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` Output: ```--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_42/1563280798.py in <module> 1 checkpoint = 'bert-base-uncased' 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint) ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt") TypeError: __init__() got an unexpected keyword argument 'return_tensors' ``` When I call the `help` method, it too confirms that there is no argument `return_tensors`. Input: ``` help(DataCollatorWithPadding.__init__) ``` Output: ``` Help on function __init__ in module transformers.data.data_collator: __init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) -> None ``` However, the documentation *[Data Collator - docs](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorwithpadding)* says that there is such an argument. By default, it returns PyTorch tensors while I need TF tensors. What am I missing? Please help me.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3303/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3302/comments
https://api.github.com/repos/huggingface/datasets/issues/3302/events
https://github.com/huggingface/datasets/pull/3302
1,058,907,168
PR_kwDODunzps4uynjc
3,302
fix old_val typo in f-string
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,355,068,000
1,637,878,483,000
1,637,600,659,000
CONTRIBUTOR
null
This PR corrects a typo in #3277 that @Carlosbogo revealed in a comment. Related closed issue: #3257. Sorry about that 😅.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3302", "html_url": "https://github.com/huggingface/datasets/pull/3302", "diff_url": "https://github.com/huggingface/datasets/pull/3302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3302.patch", "merged_at": 1637600659000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3301/comments
https://api.github.com/repos/huggingface/datasets/issues/3301/events
https://github.com/huggingface/datasets/pull/3301
1,058,718,957
PR_kwDODunzps4uyA9o
3,301
Add wikipedia tags
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,339,965,000
1,637,340,570,000
1,637,340,569,000
MEMBER
null
Add the missing tags to the wikipedia dataset card. I also added the missing language codes to our language codes list. This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3301", "html_url": "https://github.com/huggingface/datasets/pull/3301", "diff_url": "https://github.com/huggingface/datasets/pull/3301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3301.patch", "merged_at": 1637340569000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3300/comments
https://api.github.com/repos/huggingface/datasets/issues/3300/events
https://github.com/huggingface/datasets/issues/3300
1,058,644,459
I_kwDODunzps4_GaHr
3,300
❓ Dataset loading script from Hugging Face Hub
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! In the next version of `datasets`, your train and test splits will be correctly separated (changes from #3027) if you create a dataset repository with only your CSV files.\r\n\r\nAlso it seems that you overwrite the `data_files` and `data_dir` arguments in your code, when you instantiate the AGNewsConfig objects. Those parameters are not necessary since you already know which files you want to load.\r\n\r\nYou can find an example on how to specify which file the dataset has to download in this [example script](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107):\r\n```python\r\n_URLS = {\r\n \"train\": \"train-v1.1.json\", # you can use a URL or a relative path from the python script to your file in the repository\r\n \"dev\": \"dev-v1.1.json\",\r\n}\r\n```\r\n```python\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\n ]\r\n```", "Also I think the viewer will be updated when you fix the dataset script, let me know if it doesn't", "Hi @lhoestq,\r\n\r\nThanks a lot for the super quick answer!\r\n\r\nYour suggestion solves my issue. I am now able to load the dataset properly 🚀 \r\nHowever, the dataviewer is not working yet.\r\n\r\nReally, thanks a lot for your help and consideration!\r\n\r\nBest,\r\nPietro", "Great ! We'll take a look at the viewer to fix it", "@lhoestq I think I am having a related problem.\r\nMy call to load_dataset() looks like this:\r\n\r\n```\r\n datasets = load_dataset(\r\n os.path.abspath(layoutlmft.data.datasets.xfun.__file__),\r\n f\"xfun.{data_args.lang}\",\r\n additional_langs=data_args.additional_langs,\r\n keep_in_memory=True,\r\n )\r\n\r\n```\r\n\r\nMy _split_generation code is:\r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n downloaded_file = dl_manager.download_and_extract(\"https://guillaumejaume.github.io/FUNSD/dataset.zip\")\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/training_data/\"}\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TEST, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/testing_data/\"}\r\n ),\r\n ]\r\n\r\n```\r\nHowever I get the error \"TypeError: _generate_examples() got an unexpected keyword argument 'filepath'\"\r\nThe path looks right and I see the data in the path so I think the only problem I have is that it doesn't like the key \"filepath\". However, the documentation (example [here](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107)) seems to show that this is the correct parameter. 
\r\n\r\nHere is the full stack trace:\r\n\r\n```\r\nDownloading and preparing dataset xfun/xfun.en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/caseygre/.cache/huggingface/datasets/xfun/xfun.en/0.0.0/96b8cb7c57f6f822f0ab37ae3be7b82d84ac57062e774c9361ccf0a4b9ef61cc...\r\nTraceback (most recent call last):\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 975, in _prepare_split\r\n generator = self._generate_examples(**split_generator.gen_kwargs)\r\nTypeError: _generate_examples() got an unexpected keyword argument 'filepath'\r\npython-BaseException\r\n```", "Hi ! The `gen_kwargs` dictionary is passed to `_generate_examples`, so in your case it must be defined this way:\r\n```python\r\ndef _generate_examples(self, filepath):\r\n ...\r\n```\r\n\r\nAnd here is an additional tip: you can use `os.path.join(downloaded_file, \"dataset/testing_data\")` instead of `f\"{downloaded_file}/dataset/testing_data/\"` to get compatibility with Windows and streaming.\r\n\r\nIndeed Windows uses a backslash separator, not a slash, and streaming uses chained URLs (like `zip://dataset/testing_data::https://guillaumejaume.github.io/FUNSD/dataset.zip` for example)", "Thanks for your quick reply @lhoestq and so sorry for my very delayed response.\r\nWe have gotten around the error another way but I will try to duplicate this when I can. We may have had \"filepaths\" instead of \"filepath\" in our def of _generate_examples() and not noticed the difference. If I find a more useful answer for others I will add to this ticket so they know what the issue was.\r\nNote: we do have our own _generate_examples() defined with the same def as Quentin has. (But one version does have \"filepaths\".)\r\n", "Fixed in the viewer: https://huggingface.co/datasets/pietrolesci/ag_news" ]
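A minimal sketch tying the comments above together (the builder class and feature schema here are hypothetical, modeled on the FUNSD snippet quoted in the thread): the keys of `gen_kwargs` must match the parameter names of `_generate_examples` exactly, and `os.path.join` keeps paths portable across Windows and streaming.

```python
import os

import datasets


class FunsdLikeDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder illustrating the gen_kwargs <-> _generate_examples contract."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"file_name": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        downloaded_file = dl_manager.download_and_extract("https://guillaumejaume.github.io/FUNSD/dataset.zip")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # os.path.join instead of an f-string keeps Windows/streaming compatibility
                gen_kwargs={"filepath": os.path.join(downloaded_file, "dataset/training_data")},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"filepath": os.path.join(downloaded_file, "dataset/testing_data")},
            ),
        ]

    # the signature must say "filepath" (singular) because that is the gen_kwargs key above;
    # writing "filepaths" here would reproduce the TypeError from the stack trace
    def _generate_examples(self, filepath):
        for idx, file_name in enumerate(sorted(os.listdir(filepath))):
            yield idx, {"file_name": file_name}
```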
1,637,335,252,000
1,640,170,676,000
1,640,170,676,000
NONE
null
Hi there, I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works on my hub, I plan to make a PR to the original dataset. However, in trying to do so I have encountered certain problems as detailed below. Issues I have encountered: - Without a loading script, the train and test files are loaded together into a unique `dataset.Dataset` -> so I wrote a loading script. Also, I need a loading script, otherwise I cannot specify multiple configurations - Once my loading script is working locally, I do not manage to make it work on the hub. In particular, I would like to be able to load the dataset like this ```python load_dataset("pietrolesci/ag_news", name="my_configuration") ``` Apparently, `load_dataset` is able to pick up the loading script from the hub and run it. However, it errors because it is unable to find the files. The structure of my hub repo is the following ``` ag_news.py train.csv test.csv ``` and in the loading script I specify `data_dir=Path(__file__).parent` and `data_files=DataFilesDict({"train": "train.csv", "test": "test.csv"})`. In the documentation I could not find info regarding loading a dataset from the hub using a loading script present on the hub. Any suggestion is very much appreciated. Best, Pietro Link to the hub repo: https://huggingface.co/datasets/pietrolesci/ag_news BONUS: how can I make the data viewer work in this specific case? :)
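For context, a hedged sketch of what such a loading script could look like with a second configuration. The column names, config names, and label feature below are assumptions for illustration, not taken from the actual repo:

```python
import csv

import datasets

# relative paths work when the script lives in the same hub repo as the data
_URLS = {"train": "train.csv", "test": "test.csv"}


class AGNews(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="default", version=datasets.Version("1.0.0")),
        datasets.BuilderConfig(name="my_configuration", version=datasets.Version("1.0.0")),
    ]
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"text": datasets.Value("string"), "label": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        files = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": files["test"]}),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"text": row["text"], "label": row["label"]}
```

With the script and the two CSV files at the top level of the repo, `load_dataset("pietrolesci/ag_news", name="my_configuration")` should then resolve the relative paths through the download manager.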
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3300/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3299/comments
https://api.github.com/repos/huggingface/datasets/issues/3299/events
https://github.com/huggingface/datasets/issues/3299
1,058,518,213
I_kwDODunzps4_F7TF
3,299
Add option to find unique elements in nested sequences when calling `Dataset.unique`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,327,766,000
1,637,327,766,000
null
CONTRIBUTOR
null
It would be nice to have an option to flatten nested sequences when calling `Dataset.unique`, in order to find the unique elements stored inside them. Currently, `Dataset.unique` treats each nested sequence as a single value, so it can only return unique sequences, not the unique elements within them.
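Until such an option exists, a minimal workaround sketch with today's API (pure Python over the column, so it pulls the column into memory; fine for small datasets): flatten the nested sequences manually and deduplicate.

```python
from itertools import chain

from datasets import Dataset

ds = Dataset.from_dict({"tokens": [["a", "b"], ["b", "c"], ["a"]]})

# unique whole sequences (the current behavior); lists are unhashable,
# so the deduplication is shown on tuples here
unique_sequences = {tuple(seq) for seq in ds["tokens"]}

# unique elements across all nested sequences (the requested behavior)
unique_elements = set(chain.from_iterable(ds["tokens"]))

print(sorted(unique_sequences))  # [('a',), ('a', 'b'), ('b', 'c')]
print(sorted(unique_elements))   # ['a', 'b', 'c']
```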
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3299/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3298/comments
https://api.github.com/repos/huggingface/datasets/issues/3298/events
https://github.com/huggingface/datasets/issues/3298
1,058,420,201
I_kwDODunzps4_FjXp
3,298
Agnews dataset viewer is not working
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for reporting\r\nWe've already fixed the code that generates the preview for this dataset, we'll release the fix soon :)", "Hi @lhoestq, thanks for your feedback!", "Fixed in the viewer.\r\n\r\nhttps://huggingface.co/datasets/ag_news" ]
1,637,320,739,000
1,640,103,845,000
1,640,103,845,000
NONE
null
## Dataset viewer issue for 'ag_news' **Link:** https://huggingface.co/datasets/ag_news Hi there, the `ag_news` dataset viewer is not working. Am I the one who added this dataset? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3298/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3297/comments
https://api.github.com/repos/huggingface/datasets/issues/3297/events
https://github.com/huggingface/datasets/issues/3297
1,058,263,859
I_kwDODunzps4_E9Mz
3,297
.map() cache is wrongfully reused - only happens when the mapping function is imported
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for reporting. Indeed this is a current limitation of the usage we have of `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module not installed with pip should be dumped completely, rather than only taking their locations into account", "I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.\r\nIn the meanwhile I think that adding a warning and the workaround somewhere in the documentation can be helpful." ]
1,637,309,916,000
1,638,834,340,000
null
CONTRIBUTOR
null
## Describe the bug When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified. The reason for this is that `dill` that is used for creating the fingerprint [pickles imported functions by reference](https://stackoverflow.com/a/67851411). I guess it is not a widespread case, but it can still lead to unwanted results unnoticeably. ## Steps to reproduce the bug Create files `a.py` and `b.py`: ```python # a.py from datasets import load_dataset def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples if __name__ == "__main__": main() ``` ```python # b.py from datasets import load_dataset from a import mapping_func def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) if __name__ == "__main__": main() ``` Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads from the cache the result of the previous mapping function. ## Expected results Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result. ## Workaround Put the mapping function inside a dummy class as a static method: ```python # a.py class MappingFuncClass: @staticmethod def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples ``` ```python # b.py from datasets import load_dataset from a import MappingFuncClass def main(): squad = load_dataset("squad") squad.map(MappingFuncClass.mapping_func, batched=True) if __name__ == "__main__": main() ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
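A complementary mitigation while the fingerprinting limitation exists (a sketch; it trades caching for correctness rather than fixing the fingerprint): `map` accepts `load_from_cache_file`, which forces recomputation regardless of the cached result.

```python
# b.py variant that never trusts the cache while the imported mapping
# function is still being edited (uses the repro files from above)
from datasets import load_dataset

from a import mapping_func


def main():
    squad = load_dataset("squad")
    # load_from_cache_file=False forces the map to be recomputed,
    # so a stale fingerprint of the imported function cannot be reused
    squad.map(mapping_func, batched=True, load_from_cache_file=False)


if __name__ == "__main__":
    main()
```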
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3297/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3296/comments
https://api.github.com/repos/huggingface/datasets/issues/3296/events
https://github.com/huggingface/datasets/pull/3296
1,057,970,638
PR_kwDODunzps4uvlQz
3,296
Fix temporary dataset_path creation for URIs related to remote fs
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for the fix :) \r\n\r\nI think this should be `extract_path_from_uri` 's responsibility to strip the extra `/` from a badly formatted path like `hdfs:///absolute/path` (or raise an error). Do you think you can simply do the changes in `extract_path_from_uri` ? This way this fix will be available for all the other parts of the lib that need to extract the inner path from an URI of a remote filesystem\r\n\r\nThen we can also keep your test cases but simply apply them to `extract_path_from_uri` instead", "Hi @lhoestq! No problem! Thanks for your interest! :)\r\n\r\nI think stripping the 3rd `/` in `hdfs:///absolute/path` inside `extract_path_from_uri` is not the solution. When I provide `hdfs:///absolute/path` to `extract_path_from_uri` we want `/absolute/path` to be returned, as it does now (at least in the case of URIs with `hdfs` schemas, for `s3` is different as it should start with a bucket name).\r\n\r\nThe problem comes in line 1041 in the original code below:\r\n\r\nhttps://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042\r\n\r\nLets assume the following parameters for line 1041 after `extract_path_from_uri` has removed the `hdfs` schema part and the `://` from `hdfs:///absolute/path`, and `get_temporary_cache_files_directory()` returns `/tmp/a1b2b3c4`, as it is shown below: \r\n\r\n```python\r\nsrc_dataset_path = '/absolute/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nAfter passing those paths to the `Path` object, `dataset_path` contains only `/absolute/path`; that is, it has lost the temporary directory path. This is because, when two (or more) absolute paths are passed to the `Path` function, only the last one is taken. However, if the contents of those variables are:\r\n\r\n```python\r\nsrc_dataset_path = 'relative/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nthen `dataset_path` contains `/tmp/a1b2b3c4/relative/path` as expected.\r\n\r\nAbsolute paths are allowed in hdfs URIs, so that's why I added the extra function `build_local_temp_path` in the PR; so in case the second argument is an absolute path, it still will create the correct absolute path by concatenating the temp dir and the path passed by converting it to a relative path (and it also works for windows paths too.) It also allows to add the tests, checking that the main combinations are ok.\r\n\r\nI've checked all the places where the result of `extract_path_from_uri` is used, and as far as I've seen this is the only place where it is concatenated with another possible absolute path, so no need to add `build_local_temp_path` anywhere else. \r\n" ]
1,637,278,365,000
1,638,787,504,000
1,638,787,504,000
CONTRIBUTOR
null
This aims to close #3295
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3296", "html_url": "https://github.com/huggingface/datasets/pull/3296", "diff_url": "https://github.com/huggingface/datasets/pull/3296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3296.patch", "merged_at": 1638787503000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3295/comments
https://api.github.com/repos/huggingface/datasets/issues/3295/events
https://github.com/huggingface/datasets/issues/3295
1,057,954,892
I_kwDODunzps4_DxxM
3,295
Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Good catch and thanks for opening a PR :)\r\n\r\nI just responded in your PR" ]
1,637,277,842,000
1,638,787,504,000
1,638,787,504,000
CONTRIBUTOR
null
## Describe the bug When trying to build a temporary dataset path from a remote URI in this block of code: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042 the result is not the expected one when passing an absolute path in a URI like `hdfs:///absolute/path`. ## Steps to reproduce the bug ```python dataset_path = "hdfs:///absolute/path" src_dataset_path = extract_path_from_uri(dataset_path) tmp_dir = get_temporary_cache_files_directory() dataset_path = Path(tmp_dir, src_dataset_path) print(dataset_path) ``` ## Expected results With the code above, we would expect a value in `dataset_path` similar to: `/tmp/tmpnwxyvao5/absolute/path` ## Actual results However, we get a `dataset_path` value like: `/absolute/path` This is because this line here: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1041 returns the last absolute path when two absolute paths (the one in `tmp_dir` and the one extracted from the URI in `src_dataset_path`) are passed as arguments. ## Environment info - `datasets` version: 1.13.3 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3295/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3294/comments
https://api.github.com/repos/huggingface/datasets/issues/3294/events
https://github.com/huggingface/datasets/issues/3294
1,057,495,473
I_kwDODunzps4_CBmx
3,294
Add Natural Adversarial Objects dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,249,684,000
1,638,964,802,000
null
NONE
null
## Adding a Dataset - **Name:** Natural Adversarial Objects (NAO) - **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. - **Paper:** https://arxiv.org/abs/2111.04204v1 - **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8 - **Motivation:** interesting object detection dataset, useful for studying misclassifications cc @NielsRogge Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3294/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3293/comments
https://api.github.com/repos/huggingface/datasets/issues/3293/events
https://github.com/huggingface/datasets/pull/3293
1,057,004,431
PR_kwDODunzps4uslLN
3,293
Pin version exclusion for Markdown
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,218,561,000
1,637,231,285,000
1,637,231,284,000
MEMBER
null
As Markdown version 3.3.5 has a bug, it is better to exclude that version in case users already have it installed in their environment. Related to #3289, #3286.
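For illustration only (the exact specifier used in this PR's diff is not shown here), a `!=` exclusion pin allows every release except the bad one, which can be checked with the `packaging` library:

```python
from packaging.specifiers import SpecifierSet

# a "!=" pin excludes exactly one release and allows everything else
spec = SpecifierSet("!=3.3.5")
print("3.3.4" in spec)  # True
print("3.3.5" in spec)  # False
print("3.3.6" in spec)  # True
```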
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3293", "html_url": "https://github.com/huggingface/datasets/pull/3293", "diff_url": "https://github.com/huggingface/datasets/pull/3293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3293.patch", "merged_at": 1637231284000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3292/comments
https://api.github.com/repos/huggingface/datasets/issues/3292/events
https://github.com/huggingface/datasets/issues/3292
1,056,962,554
I_kwDODunzps4-__f6
3,292
Not able to load 'wikipedia' dataset
{ "login": "abhibisht89", "id": 13541524, "node_id": "MDQ6VXNlcjEzNTQxNTI0", "avatar_url": "https://avatars.githubusercontent.com/u/13541524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhibisht89", "html_url": "https://github.com/abhibisht89", "followers_url": "https://api.github.com/users/abhibisht89/followers", "following_url": "https://api.github.com/users/abhibisht89/following{/other_user}", "gists_url": "https://api.github.com/users/abhibisht89/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhibisht89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhibisht89/subscriptions", "organizations_url": "https://api.github.com/users/abhibisht89/orgs", "repos_url": "https://api.github.com/users/abhibisht89/repos", "events_url": "https://api.github.com/users/abhibisht89/events{/privacy}", "received_events_url": "https://api.github.com/users/abhibisht89/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Indeed it looks like the code snippet on the Hugging face Hub doesn't show the second parameter\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/142649237-45ba55c5-1a64-4c30-8692-2c8120572f92.png)\r\n\r\nThanks for reporting, I'm taking a look\r\n" ]
1,637,214,078,000
1,637,340,569,000
1,637,340,569,000
NONE
null
## Describe the bug I am following the instruction for loading the wikipedia dataset using datasets. However getting the below error. ## Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset("wikipedia") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 339 "Config name is missing." 340 "\nPlease pick one among the available configs: %s" % list(self.builder_configs.keys()) --> 341 + "\nExample of usage:\n\t`{}`".format(example_of_usage) 342 ) 343 builder_config = self.BUILDER_CONFIGS[0] ValueError: Config name is missing. Please pick one among the available configs: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', 
'20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] Example of usage: `load_dataset('wikipedia', '20200501.aa')` I think the other parameter is missing in the load_dataset function that is not shown in the instruction.
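In other words, the missing piece is simply the config name as the second argument, exactly as the error message's usage example shows (picking `20200501.en` here as one entry from the list above):

```python
from datasets import load_dataset

# the config name selects a dump date and language from the list in the error
dataset = load_dataset("wikipedia", "20200501.en")
```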
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3292/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3291/comments
https://api.github.com/repos/huggingface/datasets/issues/3291/events
https://github.com/huggingface/datasets/pull/3291
1,056,689,876
PR_kwDODunzps4urikR
3,291
Use f-strings in the dataset scripts
{ "login": "Carlosbogo", "id": 84228424, "node_id": "MDQ6VXNlcjg0MjI4NDI0", "avatar_url": "https://avatars.githubusercontent.com/u/84228424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Carlosbogo", "html_url": "https://github.com/Carlosbogo", "followers_url": "https://api.github.com/users/Carlosbogo/followers", "following_url": "https://api.github.com/users/Carlosbogo/following{/other_user}", "gists_url": "https://api.github.com/users/Carlosbogo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Carlosbogo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Carlosbogo/subscriptions", "organizations_url": "https://api.github.com/users/Carlosbogo/orgs", "repos_url": "https://api.github.com/users/Carlosbogo/repos", "events_url": "https://api.github.com/users/Carlosbogo/events{/privacy}", "received_events_url": "https://api.github.com/users/Carlosbogo/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,187,619,000
1,637,599,216,000
1,637,599,216,000
CONTRIBUTOR
null
Uses f-strings for string formatting in the .py files in the datasets folder
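For illustration, this is the kind of mechanical rewrite involved (a made-up example, not an excerpt from the actual diff):

```python
name, split = "squad", "train"

# before: .format() style
path_old = "{}/{}.json".format(name, split)

# after: equivalent f-string
path_new = f"{name}/{split}.json"

assert path_old == path_new
```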
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3291", "html_url": "https://github.com/huggingface/datasets/pull/3291", "diff_url": "https://github.com/huggingface/datasets/pull/3291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3291.patch", "merged_at": 1637599216000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3290/comments
https://api.github.com/repos/huggingface/datasets/issues/3290/events
https://github.com/huggingface/datasets/pull/3290
1,056,414,856
PR_kwDODunzps4uqzcv
3,290
Make several audio datasets streamable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Reading FLAC (for `librispeech_asr`) works OK for me (`soundfile` version: `0.10.3`):\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/librispeech_asr/librispeech_asr.py\", \"clean\", streaming=True, split=\"train.100\")\r\n\r\nIn [3]: item = next(iter(ds))\r\n\r\nIn [4]: item.keys()\r\nOut[4]: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\n\r\nIn [5]: item[\"file\"]\r\nOut[5]: '374-180298-0000.flac'\r\n\r\nIn [6]: item[\"audio\"].keys()\r\nOut[6]: dict_keys(['path', 'array', 'sampling_rate'])\r\n\r\nIn [7]: item[\"audio\"][\"sampling_rate\"]\r\nOut[7]: 16000\r\n\r\nIn [8]: item[\"audio\"][\"path\"]\r\nOut[8]: '374-180298-0000.flac'\r\n\r\nIn [9]: item[\"audio\"][\"array\"].shape\r\nOut[9]: (232480,)\r\n```", "Oh cool ! I think this might have come from an issue with my local `soundfile` installation then", "I'll do `multilingual_librispeech` in a separate PR since it requires the data to be in another format (in particular separate the train/dev/test splits in different files)" ]
1,637,171,021,000
1,637,334,538,000
1,637,334,537,000
MEMBER
null
<s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s> Make those audio datasets streamable: - [x] common_voice - [x] openslr - [x] vivos - [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok* - [ ] <s>multilingual_librispeech (yet to be converted)</S> *TODO in a separate PR*
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3290/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3290", "html_url": "https://github.com/huggingface/datasets/pull/3290", "diff_url": "https://github.com/huggingface/datasets/pull/3290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3290.patch", "merged_at": 1637334537000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3289/comments
https://api.github.com/repos/huggingface/datasets/issues/3289/events
https://github.com/huggingface/datasets/pull/3289
1,056,323,715
PR_kwDODunzps4uqf79
3,289
Unpin markdown for build_docs now that it's fixed
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,166,173,000
1,637,166,189,000
1,637,166,188,000
MEMBER
null
`markdown`'s bug has been fixed, so this PR reverts #3286
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3289", "html_url": "https://github.com/huggingface/datasets/pull/3289", "diff_url": "https://github.com/huggingface/datasets/pull/3289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3289.patch", "merged_at": 1637166188000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3288/comments
https://api.github.com/repos/huggingface/datasets/issues/3288/events
https://github.com/huggingface/datasets/pull/3288
1,056,145,703
PR_kwDODunzps4up6S5
3,288
Allow datasets with indices table when concatenating along axis=1
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,156,488,000
1,637,163,672,000
1,637,163,671,000
CONTRIBUTOR
null
Calls `flatten_indices` on the datasets with an indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`. cc @lhoestq: I decided to flatten all the datasets instead of flattening all but the largest one in the end. The latter approach fails on the following example: ```python a = Dataset.from_dict({"a": [10, 20, 30, 40]}) b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]}) # largest dataset a = a.select([1, 2, 3]) b = b.select([1, 2, 3]) concatenate_datasets([a, b], axis=1) # fails at concat_tables(...) because the real length of b's data is 6, while a's length is 3 after flattening (was 4 before flattening) ``` Also, it requires additional re-ordering of indices to prepare them for working with the indices table of the largest dataset. IMO not worth it when we save only one `flatten_indices` call. (Feel free to check the code of that approach at https://github.com/huggingface/datasets/commit/6acd10481c70950dcfdbfd2bab0bf0c74ad80bcb if you are interested.) Fixes #3273
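For readers unfamiliar with indices mappings, here is a minimal sketch (using the public `Dataset.flatten_indices` API) of why flattening changes the length of the underlying data:

```python
from datasets import Dataset

b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]})
b = b.select([1, 2, 3])   # creates an indices mapping over the 6 underlying rows
print(len(b))             # 3, resolved through the indices table
b = b.flatten_indices()   # materializes the selection into a new 3-row table
print(len(b))             # still 3, but now with no indices mapping at all
```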
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3288/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3288", "html_url": "https://github.com/huggingface/datasets/pull/3288", "diff_url": "https://github.com/huggingface/datasets/pull/3288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3288.patch", "merged_at": 1637163671000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3287/comments
https://api.github.com/repos/huggingface/datasets/issues/3287/events
https://github.com/huggingface/datasets/pull/3287
1,056,079,724
PR_kwDODunzps4upsWR
3,287
Add The Pile dataset and PubMed Central subset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,152,558,000
1,638,372,548,000
1,638,372,547,000
MEMBER
null
Add: - The complete final version of The Pile dataset: "all" config - PubMed Central subset of The Pile: "pubmed_central" config Close #1675, close bigscience-workshop/data_tooling#74. CC: @StellaAthena, @lewtun
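A hedged usage sketch for the two configs named above (the `the_pile` dataset path and the use of streaming are assumptions, not quoted from the PR):

```python
from datasets import load_dataset

# Both configs described in this PR; streaming avoids downloading the full
# (very large) corpus before iterating.
pile_all = load_dataset("the_pile", "all", split="train", streaming=True)
pubmed_central = load_dataset("the_pile", "pubmed_central", split="train", streaming=True)
```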
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3287/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3287", "html_url": "https://github.com/huggingface/datasets/pull/3287", "diff_url": "https://github.com/huggingface/datasets/pull/3287.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3287.patch", "merged_at": 1638372546000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3286/comments
https://api.github.com/repos/huggingface/datasets/issues/3286/events
https://github.com/huggingface/datasets/pull/3286
1,056,008,586
PR_kwDODunzps4updTK
3,286
Fix build_docs CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,147,936,000
1,637,147,960,000
1,637,147,959,000
MEMBER
null
Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues
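For reference, the temporary pin amounts to a constraint like the following in the docs requirements (the exact file it lives in is an assumption):

```
markdown==3.3.4
```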
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3286", "html_url": "https://github.com/huggingface/datasets/pull/3286", "diff_url": "https://github.com/huggingface/datasets/pull/3286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3286.patch", "merged_at": 1637147959000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3285/comments
https://api.github.com/repos/huggingface/datasets/issues/3285/events
https://github.com/huggingface/datasets/issues/3285
1,055,506,730
I_kwDODunzps4-6cEq
3,285
Add IEMOCAP dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,102,840,000
1,638,964,664,000
null
NONE
null
## Adding a Dataset - **Name:** IEMOCAP - **Description:** acted, multimodal and multispeaker database - **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf - **Data:** https://sail.usc.edu/iemocap/index.html - **Motivation:** Useful multimodal dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3285/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3284/comments
https://api.github.com/repos/huggingface/datasets/issues/3284/events
https://github.com/huggingface/datasets/issues/3284
1,055,502,909
I_kwDODunzps4-6bI9
3,284
Add VoxLingua107 dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "#self-assign" ]
1,637,102,648,000
1,638,784,185,000
null
NONE
null
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** Nice audio classification dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3284/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3283/comments
https://api.github.com/repos/huggingface/datasets/issues/3283/events
https://github.com/huggingface/datasets/issues/3283
1,055,495,874
I_kwDODunzps4-6ZbC
3,283
Add Speech Commands dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "#self-assign" ]
1,637,102,396,000
1,639,132,215,000
1,639,132,215,000
NONE
null
## Adding a Dataset - **Name:** Speech commands - **Description:** A Dataset for Limited-Vocabulary Speech Recognition - **Paper:** https://arxiv.org/abs/1804.03209 - **Data:** https://www.tensorflow.org/datasets/catalog/speech_commands, Available: http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz - **Motivation:** Nice dataset for audio classification training cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3283/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3282/comments
https://api.github.com/repos/huggingface/datasets/issues/3282/events
https://github.com/huggingface/datasets/issues/3282
1,055,054,898
I_kwDODunzps4-4twy
3,282
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
{ "login": "MinionAttack", "id": 10078549, "node_id": "MDQ6VXNlcjEwMDc4NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10078549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MinionAttack", "html_url": "https://github.com/MinionAttack", "followers_url": "https://api.github.com/users/MinionAttack/followers", "following_url": "https://api.github.com/users/MinionAttack/following{/other_user}", "gists_url": "https://api.github.com/users/MinionAttack/gists{/gist_id}", "starred_url": "https://api.github.com/users/MinionAttack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MinionAttack/subscriptions", "organizations_url": "https://api.github.com/users/MinionAttack/orgs", "repos_url": "https://api.github.com/users/MinionAttack/repos", "events_url": "https://api.github.com/users/MinionAttack/events{/privacy}", "received_events_url": "https://api.github.com/users/MinionAttack/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the dataset.", "Ah ok, I didn't realise about the login page. I'll try `use_auth_token=True` and see if that solves it.\r\n\r\nRegards!", "Hi, \r\n\r\nUsing `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.\r\n\r\nShould I leave the issue open until you fix the Dataset viewer issue?", "Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks", "The error I get when trying to load OSCAR 21.09 is this\r\n```\r\nConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n```\r\n\r\nThe URL I get in the browser is this\r\n```\r\nhttps://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n```\r\n\r\nMaybe URL is the issue? (resolve vs blob)", "> The error I get when trying to load OSCAR 21.09 is this\r\n> \r\n> ```\r\n> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> The URL I get in the browser is this\r\n> \r\n> ```\r\n> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> Maybe URL is the issue? (resolve vs blob)\r\n\r\nYou need to download your login credentials. See `huggingface-cli login` documentation and when loading the dataset use `use_auth_token=True`:\r\n`\r\nload_dataset(corpus, language, split=None, use_auth_token=True, cache_dir=cache_folder)`" ]
1,637,078,719,000
1,638,173,849,000
null
NONE
null
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)* *The `datasets` library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.* ``` raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py ``` Am I the one who added this dataset? No. Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the `datasets` library.
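A sketch of the workaround confirmed in the comments above (the config name is illustrative; any OSCAR-2109 language config should work after running `huggingface-cli login`):

```python
from datasets import load_dataset

# use_auth_token=True makes `datasets` send the token stored by
# `huggingface-cli login`, which is required because OSCAR-2109 sits
# behind an access/agreement page on the Hub.
ds = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_af", use_auth_token=True)
```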
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3282/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3281/comments
https://api.github.com/repos/huggingface/datasets/issues/3281/events
https://github.com/huggingface/datasets/pull/3281
1,055,018,876
PR_kwDODunzps4umWZE
3,281
[Datasets] Improve Covost 2
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "I am trying to use `load_dataset` with the French dataset(common voice corpus 1) which is downloaded from a common voice site and the target language is English (using colab)\r\n\r\nSteps I have followed:\r\n\r\n**1. untar:**\r\n`!tar xvzf fr.tar -C data_dir`\r\n\r\n**2. load data:**\r\n`load_dataset('covost2', 'fr_en', data_dir=\"/content/data_dir\")`\r\n\r\n0 rows are loading as shown below:\r\n```\r\nUsing custom data configuration fr_en-data_dir=%2Fcontent%2Fdata_dir\r\nReusing dataset covost2 (/root/.cache/huggingface/datasets/covost2/fr_en-data_dir=%2Fcontent%2Fdata_dir/1.0.0/bba950aae1ffa5a14b876b7e09c17b44de2c3cf60e7bd5d459640beffc78e35b)\r\n100%\r\n3/3 [00:00<00:00, 54.98it/s]\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nCan you please provide a sample working example code to load the dataset?", "Hi ! I think it only works with the subsets of Common Voice Corpus 4, not Common Voice Corpus 1" ]
1,637,076,739,000
1,643,213,826,000
1,637,232,244,000
MEMBER
null
It's currently quite confusing and not very user-friendly to follow the manual data download instructions for Covost. Currently the user has to: 1. Go to the Common Voice website 2. Find the correct dataset, which is **not** mentioned in the error message 3. Download it 4. Untar it 5. Create a language id folder (why? this folder does not exist in the downloaded `.tar` file) 6. Pass the folder containing the created language id folder This PR improves this to: 1. Go to the Common Voice website 2. Find the correct dataset, which **is** mentioned in the error message 3. Download it 4. Untar it 5. Pass the untarred folder **Note**: This PR is not at all time-critical
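A hedged sketch of the improved flow (the path is illustrative):

```python
from datasets import load_dataset

# After downloading the matching Common Voice Corpus 4 archive for the source
# language and untarring it, pass the untarred folder directly:
ds = load_dataset("covost2", "fr_en", data_dir="/path/to/untarred/fr")
```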
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3281", "html_url": "https://github.com/huggingface/datasets/pull/3281", "diff_url": "https://github.com/huggingface/datasets/pull/3281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3281.patch", "merged_at": 1637232244000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3280/comments
https://api.github.com/repos/huggingface/datasets/issues/3280/events
https://github.com/huggingface/datasets/pull/3280
1,054,766,828
PR_kwDODunzps4ulgye
3,280
Fix bookcorpusopen RAM usage
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,062,072,000
1,637,164,408,000
1,637,069,670,000
MEMBER
null
Each document is a full book, so the default arrow writer batch size of 10,000 is too big and can fill up RAM quickly before flushing the first batch to disk. I changed the batch size to 256 to use at most ~100MB of memory. Fix #3167.
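A minimal sketch of the change (the builder's other methods are omitted): `GeneratorBasedBuilder` exposes a `DEFAULT_WRITER_BATCH_SIZE` class attribute that controls how often the Arrow writer flushes examples to disk.

```python
import datasets

class BookCorpusOpen(datasets.GeneratorBasedBuilder):
    # Each example is an entire book, so flush every 256 examples instead of
    # the default 10,000 to keep peak RAM bounded.
    DEFAULT_WRITER_BATCH_SIZE = 256

    # _info, _split_generators and _generate_examples are unchanged and omitted.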
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3280", "html_url": "https://github.com/huggingface/datasets/pull/3280", "diff_url": "https://github.com/huggingface/datasets/pull/3280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3280.patch", "merged_at": 1637069670000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3279/comments
https://api.github.com/repos/huggingface/datasets/issues/3279/events
https://github.com/huggingface/datasets/pull/3279
1,054,711,852
PR_kwDODunzps4ulVHe
3,279
Minor Typo Fix - Precision to Recall
{ "login": "SebastinSanty", "id": 13795788, "node_id": "MDQ6VXNlcjEzNzk1Nzg4", "avatar_url": "https://avatars.githubusercontent.com/u/13795788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SebastinSanty", "html_url": "https://github.com/SebastinSanty", "followers_url": "https://api.github.com/users/SebastinSanty/followers", "following_url": "https://api.github.com/users/SebastinSanty/following{/other_user}", "gists_url": "https://api.github.com/users/SebastinSanty/gists{/gist_id}", "starred_url": "https://api.github.com/users/SebastinSanty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SebastinSanty/subscriptions", "organizations_url": "https://api.github.com/users/SebastinSanty/orgs", "repos_url": "https://api.github.com/users/SebastinSanty/repos", "events_url": "https://api.github.com/users/SebastinSanty/events{/privacy}", "received_events_url": "https://api.github.com/users/SebastinSanty/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,058,742,000
1,637,061,483,000
1,637,061,482,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3279", "html_url": "https://github.com/huggingface/datasets/pull/3279", "diff_url": "https://github.com/huggingface/datasets/pull/3279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3279.patch", "merged_at": 1637061482000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3278/comments
https://api.github.com/repos/huggingface/datasets/issues/3278/events
https://github.com/huggingface/datasets/pull/3278
1,054,249,463
PR_kwDODunzps4uj2EQ
3,278
Proposed update to the documentation for WER
{ "login": "wooters", "id": 2111202, "node_id": "MDQ6VXNlcjIxMTEyMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2111202?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wooters", "html_url": "https://github.com/wooters", "followers_url": "https://api.github.com/users/wooters/followers", "following_url": "https://api.github.com/users/wooters/following{/other_user}", "gists_url": "https://api.github.com/users/wooters/gists{/gist_id}", "starred_url": "https://api.github.com/users/wooters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wooters/subscriptions", "organizations_url": "https://api.github.com/users/wooters/orgs", "repos_url": "https://api.github.com/users/wooters/repos", "events_url": "https://api.github.com/users/wooters/events{/privacy}", "received_events_url": "https://api.github.com/users/wooters/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,637,018,911,000
1,637,061,577,000
1,637,061,577,000
CONTRIBUTOR
null
I wanted to submit a minor update to the description of WER for your consideration. Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0: ``` >>> from datasets import load_metric >>> metric = load_metric("wer") >>> metric.compute(predictions=["hello how are you"], references=["hello"]) 3.0 ``` and similarly from the underlying jiwer module's `wer` function: ``` >>> from jiwer import wer >>> wer("hello", "hello how are you") 3.0 ```
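For reference, the standard definition being clarified (this is the textbook formula, not quoted from the diff):

```latex
\mathrm{WER} = \frac{S + D + I}{N}
```

Here S, D, and I are the counts of substitutions, deletions, and insertions, and N is the number of words in the reference; because I is unbounded, S + D + I can exceed N, so WER can exceed 1.0.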
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3278", "html_url": "https://github.com/huggingface/datasets/pull/3278", "diff_url": "https://github.com/huggingface/datasets/pull/3278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3278.patch", "merged_at": 1637061577000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3277/comments
https://api.github.com/repos/huggingface/datasets/issues/3277/events
https://github.com/huggingface/datasets/pull/3277
1,054,122,656
PR_kwDODunzps4ujk11
3,277
f-string formatting
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hello @lhoestq, ```make style``` is applied as asked. :)" ]
1,637,012,225,000
1,637,354,408,000
1,637,165,918,000
CONTRIBUTOR
null
**Fix #3257** Replaced _.format()_ and _%_ with f-strings in the following modules: - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** - [x] **src/Datasets/\*.py** Modules in **_src/Datasets/_**: - [x] **commands** - [x] **features** - [x] **formatting** - [x] **io** - [x] **tasks** - [x] **utils** Module **datasets** will not be edited, as asked by @mariosasko. (A correction of the first PR, #3267.)
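An illustrative example of the conversion (not taken from the actual diff):

```python
name, n_configs = "squad", 3

old_format = "Dataset {} has {} configs".format(name, n_configs)
old_percent = "Dataset %s has %d configs" % (name, n_configs)
new_fstring = f"Dataset {name} has {n_configs} configs"

# All three produce the same string; f-strings are the most readable.
assert old_format == old_percent == new_fstring
```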
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3277", "html_url": "https://github.com/huggingface/datasets/pull/3277", "diff_url": "https://github.com/huggingface/datasets/pull/3277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3277.patch", "merged_at": 1637165918000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3276/comments
https://api.github.com/repos/huggingface/datasets/issues/3276/events
https://github.com/huggingface/datasets/pull/3276
1,053,793,063
PR_kwDODunzps4uihih
3,276
Update KILT metadata JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,989,925,000
1,637,061,719,000
1,637,061,718,000
MEMBER
null
Fix #3265.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3276", "html_url": "https://github.com/huggingface/datasets/pull/3276", "diff_url": "https://github.com/huggingface/datasets/pull/3276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3276.patch", "merged_at": 1637061718000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3275/comments
https://api.github.com/repos/huggingface/datasets/issues/3275/events
https://github.com/huggingface/datasets/pull/3275
1,053,698,898
PR_kwDODunzps4uiN9t
3,275
Force data files extraction if download_mode='force_redownload'
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,984,824,000
1,636,987,523,000
1,636,987,523,000
CONTRIBUTOR
null
Avoids weird issues when redownloading a dataset due to cached data not being fully updated. With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be addressed (as a workaround, not a full fix) as follows: ```python dset = load_dataset(..., download_mode="force_redownload") ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3275", "html_url": "https://github.com/huggingface/datasets/pull/3275", "diff_url": "https://github.com/huggingface/datasets/pull/3275.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3275.patch", "merged_at": 1636987523000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3274/comments
https://api.github.com/repos/huggingface/datasets/issues/3274/events
https://github.com/huggingface/datasets/pull/3274
1,053,689,140
PR_kwDODunzps4uiL8-
3,274
Fix some contact information formats
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !" ]
1,636,984,234,000
1,636,987,435,000
1,636,987,434,000
MEMBER
null
As reported in https://github.com/huggingface/datasets/issues/3188, some contact information is not displayed correctly. This PR fixes this for CoNLL-2002 and some other datasets with the same issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3274", "html_url": "https://github.com/huggingface/datasets/pull/3274", "diff_url": "https://github.com/huggingface/datasets/pull/3274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3274.patch", "merged_at": 1636987434000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3273/comments
https://api.github.com/repos/huggingface/datasets/issues/3273/events
https://github.com/huggingface/datasets/issues/3273
1,053,554,038
I_kwDODunzps4-y_V2
3,273
Respect row ordering when concatenating datasets along axis=1
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,975,634,000
1,637,163,671,000
1,637,163,671,000
CONTRIBUTOR
null
Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored. A minimal reproducible example: ```python >>> from datasets import Dataset, concatenate_datasets >>> a = Dataset.from_dict({"a": [30, 20, 10]}) >>> b = Dataset.from_dict({"b": [2, 1, 3]}) >>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1) >>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]} {'a': [10, 20, 30], 'b': [3, 1, 2]} ``` I've noticed the bug while working on #3195.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3273/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3272/comments
https://api.github.com/repos/huggingface/datasets/issues/3272/events
https://github.com/huggingface/datasets/issues/3272
1,053,516,479
I_kwDODunzps4-y2K_
3,272
Make iter_archive work with ZIP files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[ { "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hello, is this issue open for any contributor ? can I work on it ?\r\n\r\n", "Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.\r\n\r\nTo begin with, feel free to take a look at both implementations of `iter_archive` for local downloads and for data streaming:\r\n\r\nIn the `DownloadManager` for local dowloads:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/download_manager.py#L218-L242\r\n\r\nIn the `StreamingDownloadManager` to stream the content of the archive directly from the remote file:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/streaming_download_manager.py#L502-L526\r\n\r\nNotice the call to `xopen` that opens and streams a file given either an URL or a local path :)", "Okay thank you for the information. I will work on this :) ", "#self-assign" ]
1,636,973,442,000
1,637,798,927,000
null
MEMBER
null
Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive. It would be nice if it could work with ZIP files too !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3272/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3272/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3271/comments
https://api.github.com/repos/huggingface/datasets/issues/3271/events
https://github.com/huggingface/datasets/pull/3271
1,053,482,919
PR_kwDODunzps4uhgi1
3,271
Decode audio from remote
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,971,956,000
1,637,062,558,000
1,637,062,558,000
MEMBER
null
Currently the Audio feature type can only decode local audio files, not remote files. To fix this, I replaced `open` in audio.py with our `xopen` function, which is compatible with remote files. cc @albertvillanova @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3271", "html_url": "https://github.com/huggingface/datasets/pull/3271", "diff_url": "https://github.com/huggingface/datasets/pull/3271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3271.patch", "merged_at": 1637062558000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3270/comments
https://api.github.com/repos/huggingface/datasets/issues/3270/events
https://github.com/huggingface/datasets/pull/3270
1,053,465,662
PR_kwDODunzps4uhcxm
3,270
Add os.listdir for streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,971,244,000
1,636,972,023,000
1,636,972,023,000
MEMBER
null
Extend `os.listdir` to support streaming data from remote files. This is often used to navigate inside remote ZIP files, for example.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3270", "html_url": "https://github.com/huggingface/datasets/pull/3270", "diff_url": "https://github.com/huggingface/datasets/pull/3270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3270.patch", "merged_at": 1636972022000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3269/comments
https://api.github.com/repos/huggingface/datasets/issues/3269/events
https://github.com/huggingface/datasets/issues/3269
1,053,218,769
I_kwDODunzps4-xtfR
3,269
coqa NonMatchingChecksumError
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi @ZhaofengWu, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.91MB/s]\r\nDownloading: 1.79kB [00:00, 1.79MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49.0M/49.0M [00:06<00:00, 7.17MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.09M/9.09M [00:01<00:00, 6.08MB/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00, 6.48s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 333.26it/s]\r\nDataset coqa downloaded and prepared to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 285.49it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 7199\r\n })\r\n validation: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 500\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details about your development environment? You can run the command `datasets-cli env` and copy-and-paste its output:\r\n```\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nIt might be because you are using an old version of `datasets`. Could you please update it (`pip install -U datasets`) and confirm if the problem parsists? ", "I'm getting the same error in two separate environments:\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.0\r\n```\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.9.5\r\n- PyArrow version: 6.0.0\r\n```", "I'm sorry, but don't get to reproduce the error in the Linux environment.\r\n\r\n@mariosasko @lhoestq can you reproduce it?", "I also can't reproduce the error on Windows/Linux (tested both the master and the `1.15.1` version). ", "Maybe the file had issues during the download ? 
Could you try to delete your cache and try again ?\r\nBy default the downloads cache is at `~/.cache/huggingface/datasets/downloads`\r\n\r\nAlso can you check if you have a proxy that could prevent the download to succeed ? Are you able to download those files via your browser ?", "I got the same error in a third environment (google cloud) as well. The internet for these three environments are all different so I don't think that's the reason.\r\n```\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-1022-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.0\r\n```\r\nI deleted the entire `~/.cache/huggingface/datasets` on my local mac, and got a different first time error.\r\n```\r\nPython 3.9.5 (default, May 18 2021, 12:31:01) \r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.19MB/s] \r\nDownloading: 1.79kB [00:00, 712kB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.36MB/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 2.47it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 675, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/Users/zhaofengw/.cache/huggingface/modules/datasets_modules/datasets/coqa/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0/coqa.py\", line 70, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 284, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 216, in map_nested\r\n mapped = [\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 217, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 152, in _single_map_nested\r\n return function(data_struct)\r\n File 
\"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 295, in cached_path\r\n output_path = get_from_cache(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 594, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\r\n>>> dataset = load_dataset(\"coqa\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 6.26it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1087.45it/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:45<00:45, 45.60s/it]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 679, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']\r\n```\r\nI can access the URL using my browser, though I did notice a redirection -- could that have something to do with it?", "Hi @ZhaofengWu, \r\n\r\nWhat about in Google Colab? Can you run this notebook without errors? 
\r\nhttps://colab.research.google.com/drive/1CCpiiHmtNlfO_4CZ3-fW-TSShr1M0rL4?usp=sharing", "I can run your notebook fine, but if I create one myself, it has that error: https://colab.research.google.com/drive/107GIdhrauPO6ZiFDY7G9S74in4qqI2Kx?usp=sharing.\r\n\r\nIt's so funny -- it's like whenever you guys run it it's fine but whenever I run it it fails, whatever the environment is.", "I guess it must be some connection issue: the data owner may be blocking requests coming from your country or IP range...", "I mean, I don't think google colab sends the connection from my IP. Same applies to google cloud.", "Hello, I am having the same error with @ZhaofengWu first with \"social bias frames\" dataset. As I found this report, I tried also \"coqa\" and it fails as well. \r\n\r\nI test this on Google Colab. \r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\n\r\nThen another environment\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-12.0.1-arm64-arm-64bit\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n```\r\n\r\nI tried the notebook @albertvillanova provided earlier, and it fails...\r\n", "Hi, still not able to reproduce the issue with `coqa`. If you still have this issue, could you please run these additional commands ?\r\n```python\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n9090845\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n`95d427588e3733e4ebec55f6938dbba6`\r\n>>> open(path).read(500)\r\n'{\\n \"version\": \"1.0\",\\n \"data\": [\\n {\\n \"source\": \"mctest\",\\n \"id\": \"3dr23u6we5exclen4th8uq9rb42tel\",\\n \"filename\": \"mc160.test.41\",\\n \"story\": \"Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer\\'s horses slept. But Cotton wasn\\'t alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters. All of her sisters w'\r\n```\r\n\r\nThis way we can know whether you downloaded a corrupted file or an error file that could cause the `NonMatchingChecksumError` error to happen", "```\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n222\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n'1195812a37c01a4481a4748c85d0c6a9'\r\n>>> open(path).read(500)\r\n'<html>\\n<head><title>503 Service Temporarily Unavailable</title></head>\\n<body bgcolor=\"white\">\\n<center><h1>503 Service Temporarily Unavailable</h1></center>\\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\\n</body>\\n</html>\\n'\r\n```\r\nLooks like there was a server-side error when downloading the dataset? 
But I don't believe this is a transient error given (a) deleting the cache and re-downloading gives the same error; (b) it happens on multiple platforms with different network configurations; (c) other people are getting this error too, see above. So I'm not sure why it works for some people but not others.", "`wget https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json` does work. So I suspect there might be some problem in `datasets`' networking code? Can you give me some snippet that simulates how `datasets` requests the resource which I can run on my end?", "There is a redirection -- I don't know if that's the cause.", "Ok This is an issue with the server that hosts the data at `https://nlp.stanford.edu/nlp/data` that randomly returns 503 (by trying several times it also happens on my side), hopefully it can be fixed soon. I'll try to reach the people in charge of hosting the data", "Thanks. Also it might help to display a more informative error message?", "You're right. I just opened a PR that would show this error if it happens again:\r\n```python\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json (error 503)\r\n```" ]
1,636,952,647,000
1,642,600,699,000
1,642,600,699,000
NONE
null
``` >>> from datasets import load_dataset >>> dataset = load_dataset("coqa") Downloading: 3.82kB [00:00, 1.26MB/s] Downloading: 1.79kB [00:00, 733kB/s] Using custom data configuration default Downloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0... Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.32MB/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.91it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1117.44it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json'] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3269/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3268/comments
https://api.github.com/repos/huggingface/datasets/issues/3268/events
https://github.com/huggingface/datasets/issues/3268
1,052,992,681
I_kwDODunzps4-w2Sp
3,268
Dataset viewer issue for 'liweili/c4_200m'
{ "login": "liliwei25", "id": 22389228, "node_id": "MDQ6VXNlcjIyMzg5MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/22389228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liliwei25", "html_url": "https://github.com/liliwei25", "followers_url": "https://api.github.com/users/liliwei25/followers", "following_url": "https://api.github.com/users/liliwei25/following{/other_user}", "gists_url": "https://api.github.com/users/liliwei25/gists{/gist_id}", "starred_url": "https://api.github.com/users/liliwei25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liliwei25/subscriptions", "organizations_url": "https://api.github.com/users/liliwei25/orgs", "repos_url": "https://api.github.com/users/liliwei25/repos", "events_url": "https://api.github.com/users/liliwei25/events{/privacy}", "received_events_url": "https://api.github.com/users/liliwei25/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nLocally you can append `\"/*.tsv*\"` to your local path, however it doesn't work in streaming mode, and the dataset viewer does use the streaming mode.\r\nIn streaming mode, the download and extract part is done lazily. It means that instead of using local paths, it's still passing around URLs and [chained URLs](https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining)\r\n\r\nTherefore in streaming mode, `filepath` is not a local path, but instead is equal to\r\n```python\r\nzip://::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\nThe `zip://` part means that we navigate inside the remote ZIP file.\r\n\r\nYou must use `os.path.join` to navigate inside it and get your TSV files:\r\n```python\r\n>>> os.path.join(filepath, \"/*.tsv*\")\r\nzip://*.tsv*::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\n\r\n`datasets` extends `os.path.join`, `glob.glob`, etc. in your dataset scripts to work with remote files.", "hi @lhoestq ! thanks for the tip! i've updated the line of code but it's still not working. am i doing something else wrong? thank you!", "Hi ! Your dataset code is all good now :)\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: d = load_dataset(\"liweili/c4_200m\", streaming=True)\r\nDownloading: 100%|█████████████████████████████████████████████| 2.79k/2.79k [00:00<00:00, 4.83MB/s]\r\nUsing custom data configuration default\r\n\r\nIn [3]: next(iter(d[\"train\"]))\r\nOut[3]: \r\n{'input': 'Bitcoin is for $7,094 this morning, which CoinDesk says.',\r\n 'output': 'Bitcoin goes for $7,094 this morning, according to CoinDesk.'}\r\n```\r\nThough the viewer doesn't seem to be updated, I'll take a look at what's wrong", "thank you @lhoestq! 😄 ", "It's working\r\n\r\n<img width=\"1424\" alt=\"Capture d’écran 2021-12-21 à 11 24 29\" src=\"https://user-images.githubusercontent.com/1676121/146914238-24bf87c0-c68d-4699-8d6c-fa3065656d1d.png\">\r\n\r\n" ]
1,636,910,326,000
1,640,082,320,000
1,640,082,291,000
NONE
null
## Dataset viewer issue for '*liweili/c4_200m*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)* *Server Error* ``` Status code: 404 Exception: Status404Error Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist. ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3268/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3267/comments
https://api.github.com/repos/huggingface/datasets/issues/3267/events
https://github.com/huggingface/datasets/pull/3267
1,052,750,084
PR_kwDODunzps4ufQzB
3,267
Replacing .format() and % by f-strings
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n\r\nFeel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)", "> Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n> \r\n> Feel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)\r\n\r\nThank you for your answer :) , I will open a new PR with the correct changes.", "Hi @lhoestq, I submitted 3 commits in a new PR (#3277) where I did not apply black.\r\n\r\nI can apply the ```make style``` command if asked.", "Cool thanks ! Yes feel free to make sure you have `black==21.4b0` and run `make style`" ]
1,636,830,722,000
1,637,096,426,000
1,637,074,543,000
CONTRIBUTOR
null
**Fix #3257** Replaced _.format()_ and _%_ with f-strings in the following modules: - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** The modules left will follow in the next PR: - [ ] **src** Module **datasets** will not be edited, as asked by @mariosasko PS: black and isort applied to files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3267", "html_url": "https://github.com/huggingface/datasets/pull/3267", "diff_url": "https://github.com/huggingface/datasets/pull/3267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3267.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3266/comments
https://api.github.com/repos/huggingface/datasets/issues/3266/events
https://github.com/huggingface/datasets/pull/3266
1,052,700,155
PR_kwDODunzps4ufH94
3,266
Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution
{ "login": "LashaO", "id": 28014149, "node_id": "MDQ6VXNlcjI4MDE0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/28014149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LashaO", "html_url": "https://github.com/LashaO", "followers_url": "https://api.github.com/users/LashaO/followers", "following_url": "https://api.github.com/users/LashaO/following{/other_user}", "gists_url": "https://api.github.com/users/LashaO/gists{/gist_id}", "starred_url": "https://api.github.com/users/LashaO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LashaO/subscriptions", "organizations_url": "https://api.github.com/users/LashaO/orgs", "repos_url": "https://api.github.com/users/LashaO/repos", "events_url": "https://api.github.com/users/LashaO/events{/privacy}", "received_events_url": "https://api.github.com/users/LashaO/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?", "Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfig(name='plain_text', version=1.0.0, data_dir=None, data_files={'train': 'train.c.txt', 'test': 'test.c.txt'}, description='Plain text import of the Definite Pronoun Resolution Dataset.')\r\n\r\n def __post_init__(self):\r\n # The config name is used to name the cache directory.\r\n invalid_windows_characters = r\"<>:/\\|?*\"\r\n for invalid_char in invalid_windows_characters:\r\n if invalid_char in self.name:\r\n raise InvalidConfigName(\r\n f\"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. \"\r\n f\"They could create issues when creating a directory for this config on Windows filesystem.\"\r\n )\r\n if self.data_files is not None and not isinstance(self.data_files, DataFilesDict):\r\n> raise ValueError(f\"Expected a DataFilesDict in data_files but got {self.data_files}\")\r\nE ValueError: Expected a DataFilesDict in data_files but got {'train': 'train.c.txt', 'test': 'test.c.txt'}\r\n```", "Hi ! Thanks for the fixes :)\r\n\r\nInstead of uploading the `definite_pronoun_resolution` data files in this PR, maybe we can just update the URL ?\r\nThe old url was http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt, but now it's https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt (https instead of http)", "Actually the bad certificate creates an issue with the download\r\n```python\r\nimport datasets \r\ndatasets.DownloadManager().download(\"https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\")\r\n# raises: ConnectionError: Couldn't reach https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\r\n```\r\n\r\nLet me see if I can fix that", "I uploaded them to these URLs, feel free to use them instead of having the text files here in the PR :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/train.c.txt\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/test.c.txt", "Thank you for the tips! Having a busy week so anyone willing to commit the suggestions is welcome. Else, I will try to get back to this in a while.", "@LashaO Thanks for working on this. Yes, I'll take over as we already have a request to fix the URL of the Jeopardy! dataset in a separate issue.", "~~Still have to fix the error in the dummy data test of the WikiAuto dataset (so please don't merge).~~ Done! Ready for merging.", "Thank you, Mario!", "The CI failure is only related to missing tags in the dataset cards, merging :)" ]
1,636,815,694,000
1,638,789,391,000
1,638,789,391,000
CONTRIBUTOR
null
[#3264](https://github.com/huggingface/datasets/issues/3264)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3266", "html_url": "https://github.com/huggingface/datasets/pull/3266", "diff_url": "https://github.com/huggingface/datasets/pull/3266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3266.patch", "merged_at": 1638789391000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3265/comments
https://api.github.com/repos/huggingface/datasets/issues/3265/events
https://github.com/huggingface/datasets/issues/3265
1,052,666,558
I_kwDODunzps4-vmq-
3,265
Checksum error for kilt_task_wow
{ "login": "slyviacassell", "id": 22296717, "node_id": "MDQ6VXNlcjIyMjk2NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slyviacassell", "html_url": "https://github.com/slyviacassell", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "repos_url": "https://api.github.com/users/slyviacassell/repos", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.", "Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. " ]
1,636,805,057,000
1,637,061,833,000
1,637,061,718,000
NONE
null
## Describe the bug Checksum verification failed when downloading kilt_tasks/wow. See error output for details. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('kilt_tasks', 'wow') ``` ## Expected results Download successful ## Actual results ``` Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s] Traceback (most recent call last): File "kilt_wow.py", line 30, in <module> main() File "kilt_wow.py", line 27, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "kilt_wow.py", line 21, in load_dataset return datasets.load_dataset('kilt_tasks','wow') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3265/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3264/comments
https://api.github.com/repos/huggingface/datasets/issues/3264/events
https://github.com/huggingface/datasets/issues/3264
1,052,663,513
I_kwDODunzps4-vl7Z
3,264
Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
{ "login": "slyviacassell", "id": 22296717, "node_id": "MDQ6VXNlcjIyMjk2NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slyviacassell", "html_url": "https://github.com/slyviacassell", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "repos_url": "https://api.github.com/users/slyviacassell/repos", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are <1MB in size total.", "> #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.\r\n> \r\n> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. personal GDrive account) or upload them to the dataset folder directly and use github raw URLs? The files are <1MB in size.\r\n\r\nI am planning to fix it next few days. But my to-do list is full and I do not have the cache of definite_pronoun_resolution. I am glad that you can take this. Thanks a lot!", "No problem, buddy! Will submit a PR over this weekend." ]
1,636,804,032,000
1,636,810,761,000
null
NONE
null
## Describe the bug - WikiAuto Manual The original manual datasets, available at the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto), were [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author. ``` https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy The download URL for jeopardy may have moved from ``` http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` to ``` https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg ``` - definite_pronoun_resolution The following download URL for definite_pronoun_resolution cannot be reached for some reason. ``` http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('wiki_auto', 'manual') datasets.load_dataset('jeopardy') datasets.load_dataset('definite_pronoun_resolution') ``` ## Expected results Download successful ## Actual results - WikiAuto Manual ``` Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8... 0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last): File "wiki_auto.py", line 43, in <module> main() File "wiki_auto.py", line 40, in main train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data dataset = self.load_dataset() File "wiki_auto.py", line 34, in load_dataset return datasets.load_dataset('wiki_auto', 'manual') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators data_dir = dl_manager.download_and_extract(my_urls) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy ``` Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "jeopardy.py", line 45, in <module> main() File "jeopardy.py", line 42, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "jeopardy.py", line 36, in load_dataset return datasets.load_dataset("jeopardy") File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` - definite_pronoun_resolution ``` Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff... 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last): File "definite_pronoun_resolution.py", line 37, in <module> main() File "definite_pronoun_resolution.py", line 34, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "definite_pronoun_resolution.py", line 28, in load_dataset return datasets.load_dataset('definite_pronoun_resolution') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators files = dl_manager.download_and_extract( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3264/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3263/comments
https://api.github.com/repos/huggingface/datasets/issues/3263/events
https://github.com/huggingface/datasets/issues/3263
1,052,552,516
I_kwDODunzps4-vK1E
3,263
FET DATA
{ "login": "FStell01", "id": 90987031, "node_id": "MDQ6VXNlcjkwOTg3MDMx", "avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FStell01", "html_url": "https://github.com/FStell01", "followers_url": "https://api.github.com/users/FStell01/followers", "following_url": "https://api.github.com/users/FStell01/following{/other_user}", "gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}", "starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FStell01/subscriptions", "organizations_url": "https://api.github.com/users/FStell01/orgs", "repos_url": "https://api.github.com/users/FStell01/repos", "events_url": "https://api.github.com/users/FStell01/events{/privacy}", "received_events_url": "https://api.github.com/users/FStell01/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,782,366,000
1,636,810,307,000
1,636,810,307,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3263/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3262/comments
https://api.github.com/repos/huggingface/datasets/issues/3262/events
https://github.com/huggingface/datasets/pull/3262
1,052,455,082
PR_kwDODunzps4uej4t
3,262
asserts replaced with exception for image classification task, csv, json
{ "login": "manisnesan", "id": 153142, "node_id": "MDQ6VXNlcjE1MzE0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manisnesan", "html_url": "https://github.com/manisnesan", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "repos_url": "https://api.github.com/users/manisnesan/repos", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,756,499,000
1,636,974,517,000
1,636,974,517,000
CONTRIBUTOR
null
Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3262", "html_url": "https://github.com/huggingface/datasets/pull/3262", "diff_url": "https://github.com/huggingface/datasets/pull/3262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3262.patch", "merged_at": 1636974517000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3261/comments
https://api.github.com/repos/huggingface/datasets/issues/3261/events
https://github.com/huggingface/datasets/issues/3261
1,052,346,381
I_kwDODunzps4-uYgN
3,261
Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
{ "login": "lara-martin", "id": 37913218, "node_id": "MDQ6VXNlcjM3OTEzMjE4", "avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lara-martin", "html_url": "https://github.com/lara-martin", "followers_url": "https://api.github.com/users/lara-martin/followers", "following_url": "https://api.github.com/users/lara-martin/following{/other_user}", "gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}", "starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions", "organizations_url": "https://api.github.com/users/lara-martin/orgs", "repos_url": "https://api.github.com/users/lara-martin/repos", "events_url": "https://api.github.com/users/lara-martin/events{/privacy}", "received_events_url": "https://api.github.com/users/lara-martin/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```", "It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n" ]
1,636,745,119,000
1,640,082,250,000
1,640,082,250,000
NONE
null
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*' **Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows) I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance! Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3261/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3260/comments
https://api.github.com/repos/huggingface/datasets/issues/3260/events
https://github.com/huggingface/datasets/pull/3260
1,052,247,373
PR_kwDODunzps4ueCIU
3,260
Fix ConnectionError in Scielo dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "The CI error is unrelated to the change." ]
1,636,740,157,000
1,637,086,697,000
1,637,085,322,000
CONTRIBUTOR
null
This PR: * allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint) * makes the Scielo dataset streamable Fixes #3255.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3260", "html_url": "https://github.com/huggingface/datasets/pull/3260", "diff_url": "https://github.com/huggingface/datasets/pull/3260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3260.patch", "merged_at": 1637085322000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3259/comments
https://api.github.com/repos/huggingface/datasets/issues/3259/events
https://github.com/huggingface/datasets/pull/3259
1,052,189,775
PR_kwDODunzps4ud5W3
3,259
Updating details of IRC disentanglement data
{ "login": "jkkummerfeld", "id": 1298052, "node_id": "MDQ6VXNlcjEyOTgwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jkkummerfeld", "html_url": "https://github.com/jkkummerfeld", "followers_url": "https://api.github.com/users/jkkummerfeld/followers", "following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}", "gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}", "starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions", "organizations_url": "https://api.github.com/users/jkkummerfeld/orgs", "repos_url": "https://api.github.com/users/jkkummerfeld/repos", "events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}", "received_events_url": "https://api.github.com/users/jkkummerfeld/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Thank you for the cleanup!" ]
1,636,737,418,000
1,637,255,973,000
1,637,255,973,000
CONTRIBUTOR
null
I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3259", "html_url": "https://github.com/huggingface/datasets/pull/3259", "diff_url": "https://github.com/huggingface/datasets/pull/3259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3259.patch", "merged_at": 1637255973000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3258/comments
https://api.github.com/repos/huggingface/datasets/issues/3258/events
https://github.com/huggingface/datasets/issues/3258
1,052,188,195
I_kwDODunzps4-tx4j
3,258
Reload dataset that was already downloaded with `load_from_disk` from cloud storage
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }
[]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[]
1,636,737,299,000
1,636,737,299,000
null
MEMBER
null
`load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3258/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/3257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3257/comments
https://api.github.com/repos/huggingface/datasets/issues/3257/events
https://github.com/huggingface/datasets/issues/3257
1,052,118,365
I_kwDODunzps4-tg1d
3,257
Use f-strings for string formatting
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[ { "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false } ]
{ "url": "", "html_url": "", "labels_url": "", "id": 0, "node_id": "", "number": 0, "title": "", "description": "", "creator": { "login": "", "id": 0, "node_id": "", "avatar_url": "", "gravatar_id": "", "url": "", "html_url": "", "followers_url": "", "following_url": "", "gists_url": "", "starred_url": "", "subscriptions_url": "", "organizations_url": "", "repos_url": "", "events_url": "", "received_events_url": "", "type": "", "site_admin": false }, "open_issues": 0, "closed_issues": 0, "state": "", "created_at": 0, "updated_at": 0, "due_on": 0, "closed_at": null }
[ "Hi, I would be glad to help with this. Is there anyone else working on it?", "Hi, I would be glad to work on this too.", "#self-assign", "Hi @Carlosbogo,\r\n\r\nwould you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has opened a PR that does that for all the other directories?", "Oh I see. I will be glad to help with the `datasets` directory then." ]
1,636,732,935,000
1,637,165,918,000
1,637,165,918,000
CONTRIBUTOR
null
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax. > **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3257/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3257/timeline
null
null
{ "url": "", "html_url": "", "diff_url": "", "patch_url": "", "merged_at": 0 }
false