url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4360/comments | https://api.github.com/repos/huggingface/datasets/issues/4360/events | https://github.com/huggingface/datasets/pull/4360 | 1,237,239,096 | PR_kwDODunzps434izs | 4,360 | Fix example in opus_ubuntu, Add license info | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T14:22:28 | 2022-06-01T13:06:07 | 2022-06-01T12:57:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4360",
"html_url": "https://github.com/huggingface/datasets/pull/4360",
"diff_url": "https://github.com/huggingface/datasets/pull/4360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4360.patch",
"merged_at": "2022-06-01T12:57:09"
} | This PR
* fixes a typo in the example for the `opus_ubuntu` dataset, where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4360/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4359/comments | https://api.github.com/repos/huggingface/datasets/issues/4359/events | https://github.com/huggingface/datasets/pull/4359 | 1,237,149,578 | PR_kwDODunzps434Pb6 | 4,359 | Fix Version equality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T13:19:26 | 2022-05-24T16:25:37 | 2022-05-24T16:17:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4359",
"html_url": "https://github.com/huggingface/datasets/pull/4359",
"diff_url": "https://github.com/huggingface/datasets/pull/4359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4359.patch",
"merged_at": "2022-05-24T16:17:14"
} | I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (False, False)
In [4]: Version("1.0.0") != 5, Version("1.0.0") != None
Out[4]: (True, True)
```
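A minimal sketch of equality semantics along these lines (illustrative only, not the exact patch):
```python
class Version:
    def __init__(self, version_str):
        self.version_str = version_str

    def __eq__(self, other):
        if isinstance(other, str):
            other = Version(other)
        if not isinstance(other, Version):
            return False  # mirrors "a" == 5 -> False instead of raising
        return self.version_str == other.version_str

    def __ne__(self, other):
        return not self == other
```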
Note I found this issue when `doc-builder` tried to compare:
```python
if param.default != inspect._empty
```
where `param.default` is an instance of `Version`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4359/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4358/comments | https://api.github.com/repos/huggingface/datasets/issues/4358/events | https://github.com/huggingface/datasets/issues/4358 | 1,237,147,692 | I_kwDODunzps5JvWAs | 4,358 | Missing dataset tags and sections in some dataset cards | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?",
"Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object used to validate the tags."
] | 2022-05-16T13:18:16 | 2022-05-30T15:36:52 | null | CONTRIBUTOR | null | null | null | Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: expected some content in section `Citation Information` but it is empty.
- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags
- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'
- **Hate_speech18**: expected some content in section `Data Instances` but it is empty; expected some content in section `Data Splits` but it is empty
- **Jigsaw_toxicity_pred**: expected some content in section `Citation Information` but it is empty.
- **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, and `Citation Information` are empty.
- **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, and `Citation Information` are empty.
- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sms_spam**: `Data Instances` and `Data Splits` are empty.
- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4358/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4357/comments | https://api.github.com/repos/huggingface/datasets/issues/4357/events | https://github.com/huggingface/datasets/pull/4357 | 1,237,037,069 | PR_kwDODunzps4333b9 | 4,357 | Fix warning in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T11:50:17 | 2022-05-16T15:18:49 | 2022-05-16T15:10:41 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4357",
"html_url": "https://github.com/huggingface/datasets/pull/4357",
"diff_url": "https://github.com/huggingface/datasets/pull/4357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4357.patch",
"merged_at": "2022-05-16T15:10:41"
} | Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4357/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4356/comments | https://api.github.com/repos/huggingface/datasets/issues/4356/events | https://github.com/huggingface/datasets/pull/4356 | 1,236,846,308 | PR_kwDODunzps433OsB | 4,356 | Fix dataset builder default version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211"
] | 2022-05-16T09:05:10 | 2022-05-30T13:56:58 | 2022-05-30T13:47:54 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4356",
"html_url": "https://github.com/huggingface/datasets/pull/4356",
"diff_url": "https://github.com/huggingface/datasets/pull/4356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4356.patch",
"merged_at": "2022-05-30T13:47:54"
} | Currently, when using a custom config (a subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead:
```python
ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')
```
with version "0.0.0" instead of "2.0.0".
As a counter-example, when the config is present in `BUILDER_CONFIGS`:
```python
ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')
```
with correct version "2.0.0", as set in the custom config class.
The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set in the custom config class.
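For illustration, a minimal sketch of the shadowing (the `make_config` helper is hypothetical, not the exact library code):
```python
class BuilderConfig:
    def __init__(self, name="default", version="0.0.0"):
        self.name, self.version = name, version

class WikipediaConfig(BuilderConfig):
    def __init__(self, version="2.0.0", **kwargs):
        super().__init__(version=version, **kwargs)

class DatasetBuilder:
    VERSION = "0.0.0"  # the old builder-level default
    BUILDER_CONFIG_CLASS = WikipediaConfig

    def make_config(self, **config_kwargs):
        # Forwarding VERSION overrides the config class default ("2.0.0"),
        # which is how "0.0.0" won; with VERSION = None it is simply not
        # forwarded and the custom config default survives.
        if self.VERSION is not None:
            config_kwargs.setdefault("version", self.VERSION)
        return self.BUILDER_CONFIG_CLASS(**config_kwargs)
```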
This PR:
- Removes the default VERSION at `DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version).
- Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4356/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4355/comments | https://api.github.com/repos/huggingface/datasets/issues/4355/events | https://github.com/huggingface/datasets/pull/4355 | 1,236,797,490 | PR_kwDODunzps433EgP | 4,355 | Fix warning in upload_file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T08:21:31 | 2022-05-16T11:28:02 | 2022-05-16T11:19:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"merged_at": "2022-05-16T11:19:57"
} | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4355/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4354/comments | https://api.github.com/repos/huggingface/datasets/issues/4354/events | https://github.com/huggingface/datasets/issues/4354 | 1,236,404,383 | I_kwDODunzps5Jsgif | 4,354 | Problems with WMT dataset | {
"login": "eldarkurtic",
"id": 8884008,
"node_id": "MDQ6VXNlcjg4ODQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldarkurtic",
"html_url": "https://github.com/eldarkurtic",
"followers_url": "https://api.github.com/users/eldarkurtic/followers",
"following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}",
"gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions",
"organizations_url": "https://api.github.com/users/eldarkurtic/orgs",
"repos_url": "https://api.github.com/users/eldarkurtic/repos",
"events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldarkurtic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\n* `fr-en` (French - English)\r\n* `ru-en` (Russian - English)\r\n\r\nAnd the current implementation always uses all the subsets available for a language, so to define custom subsets, you'll have to clone the repo from the Hub and replace the line https://huggingface.co/datasets/wmt15/blob/main/wmt_utils.py#L688 with:\r\n`for split, ss_names in (self._subsets if self.config.subsets is None else self.config.subsets).items()`\r\n\r\nThen, you can load the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/local/wmt15_folder\", \"<one of 5 available configs>\", subsets=...)",
"@mariosasko thanks a lot for the suggested fix! ",
"Hi @mariosasko \r\n\r\nAre the docs updated? If not, I would like to get on it. I am new around here, would we helpful, if you can guide.\r\n\r\nThanks",
"Hi @khushmeeet! The docs haven't been updated, so feel free to work on this issue. This is a tricky issue, so I'll give the steps you can follow to fix this:\r\n\r\nFirst, this code:\r\nhttps://github.com/huggingface/datasets/blob/7cff5b9726a223509dbd6224de3f5f452c8d924f/src/datasets/load.py#L113-L118\r\n\r\nneeds to be replaced with (makes the dataset builder search more robust and allows us to remove the ABC stuff from `wmt_utils.py`):\r\n```python\r\n for name, obj in module.__dict__.items():\r\n if inspect.isclass(obj) and issubclass(obj, main_cls_type):\r\n if inspect.isabstract(obj):\r\n continue\r\n module_main_cls = obj\r\n obj_module = inspect.getmodule(obj)\r\n if obj_module is not None and module == obj_module:\r\n break\r\n```\r\n\r\nThen, all the `wmt_utils.py` scripts need to be updated as follows (these are the diffs with the requiered changes):\r\n````diff\r\n import os\r\n import re\r\n import xml.etree.cElementTree as ElementTree\r\n-from abc import ABC, abstractmethod\r\n\r\n import datasets\r\n````\r\n\r\n````diff\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n _DESCRIPTION = \"\"\"\\\r\n-Translate dataset based on the data from statmt.org.\r\n+Translation dataset based on the data from statmt.org.\r\n\r\n-Versions exists for the different years using a combination of multiple data\r\n-sources. The base `wmt_translate` allows you to create your own config to choose\r\n-your own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\r\n+Versions exist for different years using a combination of data\r\n+sources. The base `wmt` allows you to create a custom dataset by choosing\r\n+your own data/language pair. This can be done as follows:\r\n\r\n ```\r\n-config = datasets.wmt.WmtConfig(\r\n- version=\"0.0.1\",\r\n+from datasets import inspect_dataset, load_dataset_builder\r\n+\r\n+inspect_dataset(\"<insert the dataset name\", \"path/to/scripts\")\r\n+builder = load_dataset_builder(\r\n+ \"path/to/scripts/wmt_utils.py\",\r\n language_pair=(\"fr\", \"de\"),\r\n subsets={\r\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\r\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\r\n },\r\n )\r\n-builder = datasets.builder(\"wmt_translate\", config=config)\r\n-```\r\n\r\n+# Standard version\r\n+builder.download_and_prepare()\r\n+ds = builder.as_dataset()\r\n+\r\n+# Streamable version\r\n+ds = builder.as_streaming_dataset()\r\n+```\r\n \"\"\"\r\n````\r\n\r\n````diff\r\n+class Wmt(datasets.GeneratorBasedBuilder):\r\n \"\"\"WMT translation dataset.\"\"\"\r\n+\r\n+ BUILDER_CONFIG_CLASS = WmtConfig\r\n\r\n def __init__(self, *args, **kwargs):\r\n- if type(self) == Wmt and \"config\" not in kwargs: # pylint: disable=unidiomatic-typecheck\r\n- raise ValueError(\r\n- \"The raw `wmt_translate` can only be instantiated with the config \"\r\n- \"kwargs. 
You may want to use one of the `wmtYY_translate` \"\r\n- \"implementation instead to get the WMT dataset for a specific year.\"\r\n- )\r\n super(Wmt, self).__init__(*args, **kwargs)\r\n\r\n @property\r\n- @abstractmethod\r\n def _subsets(self):\r\n \"\"\"Subsets that make up each split of the dataset.\"\"\"\r\n````\r\n```diff\r\n \"\"\"Subsets that make up each split of the dataset for the language pair.\"\"\"\r\n source, target = self.config.language_pair\r\n filtered_subsets = {}\r\n- for split, ss_names in self._subsets.items():\r\n+ subsets = self._subsets if self.config.subsets is None else self.config.subsets\r\n+ for split, ss_names in subsets.items():\r\n filtered_subsets[split] = []\r\n for ss_name in ss_names:\r\n dataset = DATASET_MAP[ss_name]\r\n```\r\n\r\n`wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t` have this script, so all of them need to be updated. Also, the dataset summaries from the READMEs of these datasets need to be updated to match the new `_DESCRIPTION` string. And that's it! Let me know if you need additional help.",
"Hi @mariosasko ,\r\n\r\nI have made the changes as suggested by you and have opened a PR #4537.\r\n\r\nThanks",
"Resolved via #4554 "
] | 2022-05-15T20:58:26 | 2022-07-11T14:54:02 | 2022-07-11T14:54:01 | NONE | null | null | null | ## Describe the bug
I am trying to load the WMT15 dataset and to define which data sources to use for the train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore.
## Steps to reproduce the bug
```python
>>> import datasets
>>> a = datasets.translate.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'translate'
>>> a = datasets.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'wmt'
```
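For reference, a sketch of a currently working invocation, using one of the predefined language-pair configs (`cs-en`, `de-en`, `fi-en`, `fr-en`, `ru-en`):
```python
from datasets import load_dataset

# pick one of the predefined wmt15 configs, e.g. German-English
dset = load_dataset("wmt15", "de-en")
```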
## Expected results
To load WMT15 with the given data sources.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4354/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4353/comments | https://api.github.com/repos/huggingface/datasets/issues/4353/events | https://github.com/huggingface/datasets/pull/4353 | 1,236,092,176 | PR_kwDODunzps43016x | 4,353 | Don't strip proceeding hyphen | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-14T18:25:29 | 2022-05-16T18:51:38 | 2022-05-16T13:52:11 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"merged_at": "2022-05-16T13:52:10"
} | Closes #4320. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4353/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4352/comments | https://api.github.com/repos/huggingface/datasets/issues/4352/events | https://github.com/huggingface/datasets/issues/4352 | 1,236,086,170 | I_kwDODunzps5JrS2a | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message"
] | 2022-05-14T17:55:15 | 2022-05-16T15:09:17 | null | NONE | null | null | null | ## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop over columns that I figured out what was going on.
It seems like `.map()` could check that, for at least one instance from the dataset, the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature request, but it feels more like a bug to me.
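A rough sketch of what such a fail-fast check might look like (a hypothetical helper, not the actual implementation; it leans on `encode_nested_example`, the internal encoder `datasets` uses, whose signature I'm assuming here):
```python
from datasets.features.features import encode_nested_example

def check_first_example(features, example):
    # Hypothetical fail-fast check: compare the declared Features against
    # what the mapped function actually returned for a single example.
    missing = set(features) - set(example)
    if missing:
        raise TypeError(f"Mapped function did not return columns: {missing}")
    for name, feature in features.items():
        try:
            encode_nested_example(feature, example[name])  # raises on mismatch
        except Exception as err:
            raise TypeError(
                f"Column {name!r} does not match declared type {feature}"
            ) from err
```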
## Steps to reproduce the bug
I don't have explicit code to repro the bug, but I'll show an example.
Code prior to the fix:
```python
def preprocess_data(examples):
# returns an encoded data dict with keys that match the features, but the types do not match
...
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['audit_type'].unique().tolist()
features = Features({
'image': Array3D(dtype="uint8", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)
```
The Features set that fixed it:
```python
features = Features({
'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))),
'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),
'attention_mask': Sequence(Sequence(Value(dtype='int64'))),
'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),
'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
```
The difference between my original code (which was based on the documentation) and the working code is the addition of the `Sequence(...)` wrapper to 4 of the 5 features, as I am working with paginated data and the doc examples are not.
## Expected results
Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they do not match.
## Actual results
Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages don't make this obvious.
Example errors:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Environment info
datasets version: 2.1.0
Platform: macOS-12.2.1-arm64-arm-64bit
Python version: 3.9.12
PyArrow version: 6.0.1
Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4352/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4351/comments | https://api.github.com/repos/huggingface/datasets/issues/4351/events | https://github.com/huggingface/datasets/issues/4351 | 1,235,950,209 | I_kwDODunzps5JqxqB | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/datasets/issues/4196)."
] | 2022-05-14T11:30:42 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took about 35 minutes (and that's given that I have a fiber optic connection). The only output during that process was a progress bar for flattening indices, followed by ~35 minutes of complete silence.
**Describe the solution you'd like**
I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..); it would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm.
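For what it's worth, `fsspec` already exposes a callback hook that could drive such a bar; a rough sketch of the idea (bucket name and paths are made up):
```python
import fsspec
from fsspec.callbacks import TqdmCallback

# Copy a directory produced by save_to_disk() to s3 while tqdm
# reports progress for each transferred file.
fs = fsspec.filesystem("s3")
fs.put(
    "path/to/saved_dataset",          # local dir from save_to_disk()
    "s3://my-bucket/saved_dataset",   # remote target
    recursive=True,
    callback=TqdmCallback(),
)
```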
**Describe alternatives you've considered**
- Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore, which works with a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4351/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4350/comments | https://api.github.com/repos/huggingface/datasets/issues/4350/events | https://github.com/huggingface/datasets/pull/4350 | 1,235,505,104 | PR_kwDODunzps43zKIV | 4,350 | Add a new metric: CTC_Consistency | {
"login": "YEdenZ",
"id": 92551194,
"node_id": "U_kgDOBYQ4Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/92551194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YEdenZ",
"html_url": "https://github.com/YEdenZ",
"followers_url": "https://api.github.com/users/YEdenZ/followers",
"following_url": "https://api.github.com/users/YEdenZ/following{/other_user}",
"gists_url": "https://api.github.com/users/YEdenZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YEdenZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YEdenZ/subscriptions",
"organizations_url": "https://api.github.com/users/YEdenZ/orgs",
"repos_url": "https://api.github.com/users/YEdenZ/repos",
"events_url": "https://api.github.com/users/YEdenZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/YEdenZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you."
] | 2022-05-13T17:31:19 | 2022-05-19T10:23:04 | 2022-05-19T10:23:03 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4350",
"html_url": "https://github.com/huggingface/datasets/pull/4350",
"diff_url": "https://github.com/huggingface/datasets/pull/4350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4350.patch",
"merged_at": null
} | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run in the tests? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4350/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4349/comments | https://api.github.com/repos/huggingface/datasets/issues/4349/events | https://github.com/huggingface/datasets/issues/4349 | 1,235,474,765 | I_kwDODunzps5Jo9lN | 4,349 | Dataset.map()'s fails at any value of parameter writer_batch_size | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```",
"Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352",
"Did you close it because you found that it was due to the incorrect Feature types ?",
"Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue",
"I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?",
"The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk"
] | 2022-05-13T16:55:12 | 2022-06-02T12:51:11 | 2022-05-14T15:08:08 | NONE | null | null | null | ## Describe the bug
If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor): the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model, which allows you to use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug.
I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.
Code I am using is provided below
## Steps to reproduce the bug
I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents.
```python
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['label'].unique().tolist()  # ClassLabel expects a plain list of names
features = Features({
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)
encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)
encoded_dataset.set_format(type="torch")
return encoded_dataset
```
```python
PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False)
def preprocess_data(examples):
directory = os.path.join(FILES_PATH, examples['file_location'])
images_dir = os.path.join(directory, PDF_IMAGE_DIR)
textract_response_path = os.path.join(directory, 'textract.json')
doc_meta_path = os.path.join(directory, 'doc_meta.json')
textract_document = get_textract_document(textract_response_path, doc_meta_path)
images, words, bboxes = get_doc_training_data(images_dir, textract_document)
encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True)
# https://github.com/NielsRogge/Transformers-Tutorials/issues/36
encoded_inputs["image"] = np.array(encoded_inputs["image"])
encoded_inputs["label"] = examples['label_id']
return encoded_inputs
```
## Expected results
My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.
## Actual results
If `writer_batch_size` is set to a value less than the number of rows, I get either:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
or simply
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
If it is greater than the number of rows, I get the `zsh: killed` error above.
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4349/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4348/comments | https://api.github.com/repos/huggingface/datasets/issues/4348/events | https://github.com/huggingface/datasets/issues/4348 | 1,235,432,976 | I_kwDODunzps5JozYQ | 4,348 | `inspect` functions can't fetch dataset script from the Hub | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?",
"Good catch ! Yea I think it's fine :)"
] | 2022-05-13T16:08:26 | 2022-06-09T10:26:06 | 2022-06-09T10:26:06 | MEMBER | null | null | null | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
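# A possible manual workaround in the meantime (script URL assumed from the
# GitHub repo layout; adjust if the script lives elsewhere):
>>> import requests, pathlib
>>> url = "https://raw.githubusercontent.com/huggingface/datasets/main/datasets/rotten_tomatoes/rotten_tomatoes.py"
>>> dest = pathlib.Path('path/to/my/local/folder'); dest.mkdir(parents=True, exist_ok=True)
>>> _ = (dest / 'rotten_tomatoes.py').write_text(requests.get(url).text)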
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4348/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4347/comments | https://api.github.com/repos/huggingface/datasets/issues/4347/events | https://github.com/huggingface/datasets/pull/4347 | 1,235,318,064 | PR_kwDODunzps43yihq | 4,347 | Support remote cache_dir | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.",
"<s>`xjoin` returns windows paths (not posix) on windows, since it just extends`os.path.join` </s>\r\n\r\nActually you are right.\r\n\r\nhttps://github.com/huggingface/datasets/blob/08ec04ccb59630a3029b2ecd8a14d327bddd0c4a/src/datasets/utils/streaming_download_manager.py#L104-L105\r\n\r\nThough this is not an issue because posix paths (as returned by Path().as_posix()) work on windows. That's why we can replace `os.path.join` with `xjoin` in streaming mode. They look like `c:/Program Files/` or something (can't confirm right now, I don't have a windows with me)",
"Until now, we have always replaced \"/\" in paths with `os.path.join` (`os.sep`,...) in order to support Windows paths (that contain r\"\\\\\").\r\n\r\nNow, you suggest ignoring this and work with POSIX strings (with \"/\").\r\n\r\nAs an example, when passing `cache_dir=r\"C:\\Users\\Username\\.mycache\"`:\r\n- Until now, it results in `self._cache_downloaded_dir = r\"C:\\Users\\Username\\.mycache\\downloads\"`\r\n- If we use `xjoin`, it will give `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`\r\n\r\nYou say this is OK and we don't care if we work with POSIX strings on Windows machines.\r\n\r\nI'm incorporating your suggested changes then...",
"Also note that using `xjoin`, if we pass `cache_dir=\"C:\\\\Users\\\\Username\\\\.mycache\"`, we get:\r\n- `self._cache_dir_root = \"C:\\\\Users\\\\Username\\\\.mycache\"`\r\n- `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`",
"It looks like it broke the CI on windows :/ maybe this was not a good idea, sorry"
] | 2022-05-13T14:26:35 | 2022-05-25T16:35:23 | 2022-05-25T16:27:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4347",
"html_url": "https://github.com/huggingface/datasets/pull/4347",
"diff_url": "https://github.com/huggingface/datasets/pull/4347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4347.patch",
"merged_at": "2022-05-25T16:27:03"
} | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
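As an illustration of what this enables, a remote cache could be used like this (the bucket path, filesystem, and runner below are assumptions for this sketch, not taken from the PR):
```python
from datasets import load_dataset

# Sketch: keep the Wikipedia builder's cache in a remote GCS bucket.
# Assumes a matching fsspec filesystem implementation (e.g. gcsfs) is installed.
wiki = load_dataset(
    "wikipedia",
    "20220301.en",
    beam_runner="DirectRunner",           # Apache Beam runner used by the builder
    cache_dir="gs://my-bucket/hf-cache",  # hypothetical remote cache location
)
```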
This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4347/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4346/comments | https://api.github.com/repos/huggingface/datasets/issues/4346/events | https://github.com/huggingface/datasets/issues/4346 | 1,235,067,062 | I_kwDODunzps5JnaC2 | 4,346 | GH Action to build documentation never ends | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-13T10:44:44 | 2022-05-13T11:22:00 | 2022-05-13T11:22:00 | MEMBER | null | null | null | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally had to force-cancel the workflow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4346/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4345/comments | https://api.github.com/repos/huggingface/datasets/issues/4345/events | https://github.com/huggingface/datasets/pull/4345 | 1,235,062,787 | PR_kwDODunzps43xrky | 4,345 | Fix never ending GH Action to build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-13T10:40:10 | 2022-05-13T11:29:43 | 2022-05-13T11:22:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"merged_at": "2022-05-13T11:22:00"
} | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
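As an aside, a minimal sanity check along these lines could catch unclosed fences before the docs build hangs (an illustrative sketch; the doc paths and file extension are assumptions):
```python
import sys
from pathlib import Path

# Sketch: flag doc source files containing an odd number of fence markers,
# i.e. at least one code block that is never closed.
bad = [
    path
    for path in Path("docs/source").rglob("*.mdx")
    if sum(line.lstrip().startswith("```") for line in path.read_text(encoding="utf-8").splitlines()) % 2
]
for path in bad:
    print(f"Unbalanced code fences in {path}", file=sys.stderr)
sys.exit(1 if bad else 0)
```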
Fix #4346. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4345/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | {
"login": "felixdivo",
"id": 4403130,
"node_id": "MDQ6VXNlcjQ0MDMxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4403130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixdivo",
"html_url": "https://github.com/felixdivo",
"followers_url": "https://api.github.com/users/felixdivo/followers",
"following_url": "https://api.github.com/users/felixdivo/following{/other_user}",
"gists_url": "https://api.github.com/users/felixdivo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixdivo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixdivo/subscriptions",
"organizations_url": "https://api.github.com/users/felixdivo/orgs",
"repos_url": "https://api.github.com/users/felixdivo/repos",
"events_url": "https://api.github.com/users/felixdivo/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixdivo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-13T08:06:00 | 2022-05-25T09:23:43 | 2022-05-24T15:35:21 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"merged_at": "2022-05-24T15:35:21"
} | I think due to #1626, the docstring contained this error ever since `seed` was added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4343/comments | https://api.github.com/repos/huggingface/datasets/issues/4343/events | https://github.com/huggingface/datasets/issues/4343 | 1,234,864,168 | I_kwDODunzps5Jmogo | 4,343 | Metrics documentation is not accessible in the datasets doc UI | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | closed | false | null | [] | null | [
"Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor "
] | 2022-05-13T07:46:30 | 2022-06-03T08:50:25 | 2022-06-03T08:50:25 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Searching for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index. One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what a metric expects as input: for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function docstring but not in the `README.md`, so one needs to read the code to understand what the metric expects.
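For instance, surfacing a minimal usage example like this one (adapted from the `squad` docstring) would make the expected input format immediately clear:
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"prediction_text": "1976", "id": "56e10a3be3433e1400422b22"}]
references = [{"answers": {"answer_start": [97], "text": ["1976"]}, "id": "56e10a3be3433e1400422b22"}]
results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # {'exact_match': 100.0, 'f1': 100.0}
```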
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4343/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4342/comments | https://api.github.com/repos/huggingface/datasets/issues/4342/events | https://github.com/huggingface/datasets/pull/4342 | 1,234,743,765 | PR_kwDODunzps43woHm | 4,342 | Fix failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-13T05:03:38 | 2022-05-13T05:47:42 | 2022-05-13T05:47:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4342",
"html_url": "https://github.com/huggingface/datasets/pull/4342",
"diff_url": "https://github.com/huggingface/datasets/pull/4342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4342.patch",
"merged_at": "2022-05-13T05:47:41"
} | This PR adds `sacremoses` as explicit tests dependency (required by sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
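A minimal sketch of this kind of change in `setup.py` (the surrounding entries are placeholders, not the actual diff):
```python
# setup.py (sketch): declare the metrics' dependency as an explicit test requirement
TESTS_REQUIRE = [
    # ... other test dependencies ...
    "sacremoses",  # required by the sari and wiki_split metrics
]
```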
Fix #4341. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4342/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-05-13T04:55:17 | 2022-05-13T05:47:41 | 2022-05-13T05:47:41 | MEMBER | null | null | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for the sari and wiki_split metrics:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4340/comments | https://api.github.com/repos/huggingface/datasets/issues/4340/events | https://github.com/huggingface/datasets/pull/4340 | 1,234,671,025 | PR_kwDODunzps43wY1U | 4,340 | Fix irc_disentangle dataset script | {
"login": "i-am-pad",
"id": 32005017,
"node_id": "MDQ6VXNlcjMyMDA1MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32005017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-pad",
"html_url": "https://github.com/i-am-pad",
"followers_url": "https://api.github.com/users/i-am-pad/followers",
"following_url": "https://api.github.com/users/i-am-pad/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-pad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-pad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-pad/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-pad/orgs",
"repos_url": "https://api.github.com/users/i-am-pad/repos",
"events_url": "https://api.github.com/users/i-am-pad/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-pad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR"
] | 2022-05-13T02:37:57 | 2022-05-24T15:37:30 | 2022-05-24T15:37:29 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"merged_at": null
} | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4340/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4339/comments | https://api.github.com/repos/huggingface/datasets/issues/4339/events | https://github.com/huggingface/datasets/pull/4339 | 1,234,496,289 | PR_kwDODunzps43v0WT | 4,339 | Dataset loader for the MSLR2022 shared task | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines within the rows of a file. I'm happy to make a PR to change how this handling works, or make the change within this PR. \r\n\r\nWe should figure out:\r\n1. Does this dummy data need to be generated more than once? (It looks like no)\r\n2. Should this be fixed generally? (needs a HF person to weigh in here)\r\n3. What is the right way for such a fix to exist permanently here; the [Contributing document](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) doesn't provide guidance on any tests. Writing a test is several times more effort than fixing the underlying issue. (again needs a HF person)",
"Would someone from HF mind taking a look at this PR? (@lhoestq)",
"Hi ! Sorry for the delay in responding :)\r\n\r\nI don't think there's a big need to fix this in the general case for now, feel free to just generate the dummy data for this specific dataset :)\r\n\r\nThe `datasets-cli dummy_data datasets/mslr2022` command should tell you what dummy files to generate. In each dummy file you just need to include enough data to generate 4 or 5 examples",
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome! Generated the dummy data and the tests now pass. @jayded thanks for your help! If you and @lucylw are happy with this I think it's ready to be merged. @lhoestq this is ready for another look :)",
"Hi @lhoestq, is there anything blocking this from being merged that I can address?",
"Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n\r\nI think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ?\r\nFeel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n\r\nOnce the dataset is under the AllenAI org, we can close this PR\r\n",
"> Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n> \r\n> I think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ? Feel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n> \r\n> Once the dataset is under the AllenAI org, we can close this PR\r\n\r\nSweet! It is uploaded here: https://huggingface.co/datasets/allenai/mslr2022",
"Nice ! Thanks :)\r\n\r\nI think we can close this PR then.\r\n\r\nI noticed that the dataset preview is not available on this dataset, this is because we require datasets to work in streaming mode to show a preview. However TAR archives don't work well in streaming mode (you can't know in advance what files are inside a TAR archive without reading it completely). This can be fixed by using a ZIP archive instead.\r\n\r\nLet me know if you have questions or if I can help."
] | 2022-05-12T21:23:41 | 2022-07-18T17:19:27 | 2022-07-18T16:58:34 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4339",
"html_url": "https://github.com/huggingface/datasets/pull/4339",
"diff_url": "https://github.com/huggingface/datasets/pull/4339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4339.patch",
"merged_at": null
} | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
```
Usage looks like:
```python
>>> ms2 = load_dataset("mslr2022", "ms2", split="validation")
>>> ms2[0].keys()
dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])
>>> ms2[0]["target"]
'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'
```
I have tested that this works with the following command:
```bash
datasets-cli test datasets/mslr2022 --save_infos --all_configs
```
However, I am having a little trouble generating the dummy data:
```bash
datasets-cli dummy_data datasets/mslr2022 --auto_generate
```
errors out with the following stack trace:
```
Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data.
Traceback (most recent call last):
File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module>
load_entry_point('datasets', 'console_scripts', 'datasets-cli')()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run
keep_uncompressed=self._keep_uncompressed,
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples
reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
return _read(filepath_or_buffer, kwds)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
return parser.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
index, columns, col_dict = self._engine.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read
chunks = self._reader.read_low_memory(nrows)
File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2
```
I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:
```
The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).
It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`.
```
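One possible workaround (an illustrative sketch, not something committed in this PR) is to normalize those terminators before parsing:
```python
from pathlib import Path

# Sketch: replace the Unicode Line Separator (U+2028) and Paragraph Separator
# (U+2029) characters that confuse pandas' C parser with plain spaces.
path = Path("dev-inputs.csv")  # hypothetical local copy of the raw file
text = path.read_text(encoding="utf-8")
path.write_text(text.replace("\u2028", " ").replace("\u2029", " "), encoding="utf-8")
```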
Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4339/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4338/comments | https://api.github.com/repos/huggingface/datasets/issues/4338/events | https://github.com/huggingface/datasets/pull/4338 | 1,234,478,851 | PR_kwDODunzps43vwsm | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T21:02:08 | 2022-05-16T15:51:02 | 2022-05-16T15:42:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4338",
"html_url": "https://github.com/huggingface/datasets/pull/4338",
"diff_url": "https://github.com/huggingface/datasets/pull/4338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4338.patch",
"merged_at": "2022-05-16T15:42:59"
} | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4338/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4337/comments | https://api.github.com/repos/huggingface/datasets/issues/4337/events | https://github.com/huggingface/datasets/pull/4337 | 1,234,470,083 | PR_kwDODunzps43vuzF | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T20:52:02 | 2022-05-16T16:26:19 | 2022-05-16T16:18:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"merged_at": "2022-05-16T16:18:30"
} | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4337/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4336/comments | https://api.github.com/repos/huggingface/datasets/issues/4336/events | https://github.com/huggingface/datasets/pull/4336 | 1,234,446,174 | PR_kwDODunzps43vpqG | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n",
"The CI errors about missing content in the dataset cards can be ignored in this PR btw",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4336). All of your documentation changes will be reflected on that endpoint."
] | 2022-05-12T20:24:45 | 2022-05-16T16:25:00 | 2022-05-16T16:24:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4336",
"html_url": "https://github.com/huggingface/datasets/pull/4336",
"diff_url": "https://github.com/huggingface/datasets/pull/4336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4336.patch",
"merged_at": "2022-05-16T16:24:59"
} | Adding evaluation metadata for :
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4336/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4335/comments | https://api.github.com/repos/huggingface/datasets/issues/4335/events | https://github.com/huggingface/datasets/pull/4335 | 1,234,157,123 | PR_kwDODunzps43usJP | 4,335 | Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets' :['unknown'] are not registered tags\r\n- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty",
"And yes we can ignore all the CI errors related to missing content in the dataset cards, these issues can be fixed in other PRs",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T15:28:16 | 2022-05-16T16:31:10 | 2022-05-16T16:23:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4335",
"html_url": "https://github.com/huggingface/datasets/pull/4335",
"diff_url": "https://github.com/huggingface/datasets/pull/4335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4335.patch",
"merged_at": "2022-05-16T16:23:08"
} | Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4335/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4334/comments | https://api.github.com/repos/huggingface/datasets/issues/4334/events | https://github.com/huggingface/datasets/pull/4334 | 1,234,103,477 | PR_kwDODunzps43uguB | 4,334 | Adding eval metadata for billsum | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-12T14:49:08 | 2022-05-12T14:49:24 | 2022-05-12T14:49:24 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4334",
"html_url": "https://github.com/huggingface/datasets/pull/4334",
"diff_url": "https://github.com/huggingface/datasets/pull/4334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4334.patch",
"merged_at": null
} | Adding eval metadata for billsum | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4334/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4333/comments | https://api.github.com/repos/huggingface/datasets/issues/4333/events | https://github.com/huggingface/datasets/pull/4333 | 1,234,038,705 | PR_kwDODunzps43uSuj | 4,333 | Adding eval metadata for Banking 77 | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)"
] | 2022-05-12T14:05:05 | 2022-05-12T21:03:32 | 2022-05-12T21:03:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4333",
"html_url": "https://github.com/huggingface/datasets/pull/4333",
"diff_url": "https://github.com/huggingface/datasets/pull/4333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4333.patch",
"merged_at": "2022-05-12T21:03:31"
} | Adding eval metadata for Banking 77 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4333/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4332/comments | https://api.github.com/repos/huggingface/datasets/issues/4332/events | https://github.com/huggingface/datasets/pull/4332 | 1,234,021,188 | PR_kwDODunzps43uO8S | 4,332 | Adding eval metadata for arabic speech corpus | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-12T13:51:38 | 2022-05-12T21:03:21 | 2022-05-12T21:03:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4332",
"html_url": "https://github.com/huggingface/datasets/pull/4332",
"diff_url": "https://github.com/huggingface/datasets/pull/4332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4332.patch",
"merged_at": "2022-05-12T21:03:20"
} | Adding eval metadata for arabic speech corpus | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4332/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4331/comments | https://api.github.com/repos/huggingface/datasets/issues/4331/events | https://github.com/huggingface/datasets/pull/4331 | 1,234,016,110 | PR_kwDODunzps43uN2R | 4,331 | Adding eval metadata to Amazon Polarity | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-12T13:47:59 | 2022-05-12T21:03:14 | 2022-05-12T21:03:13 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4331",
"html_url": "https://github.com/huggingface/datasets/pull/4331",
"diff_url": "https://github.com/huggingface/datasets/pull/4331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4331.patch",
"merged_at": "2022-05-12T21:03:13"
} | Adding eval metadata to Amazon Polarity | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4331/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4330/comments | https://api.github.com/repos/huggingface/datasets/issues/4330/events | https://github.com/huggingface/datasets/pull/4330 | 1,233,992,681 | PR_kwDODunzps43uIwm | 4,330 | Adding eval metadata to Allociné dataset | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-12T13:31:39 | 2022-05-12T21:03:05 | 2022-05-12T21:03:05 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4330",
"html_url": "https://github.com/huggingface/datasets/pull/4330",
"diff_url": "https://github.com/huggingface/datasets/pull/4330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4330.patch",
"merged_at": "2022-05-12T21:03:05"
} | Adding eval metadata to Allociné dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4330/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4329/comments | https://api.github.com/repos/huggingface/datasets/issues/4329/events | https://github.com/huggingface/datasets/pull/4329 | 1,233,991,207 | PR_kwDODunzps43uIcF | 4,329 | Adding eval metadata for AG News | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-12T13:30:32 | 2022-05-12T21:02:41 | 2022-05-12T21:02:40 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4329",
"html_url": "https://github.com/huggingface/datasets/pull/4329",
"diff_url": "https://github.com/huggingface/datasets/pull/4329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4329.patch",
"merged_at": "2022-05-12T21:02:40"
} | Adding eval metadata for AG News | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4329/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4328/comments | https://api.github.com/repos/huggingface/datasets/issues/4328/events | https://github.com/huggingface/datasets/pull/4328 | 1,233,856,690 | PR_kwDODunzps43trrd | 4,328 | Fix and clean Apache Beam functionality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T11:41:07 | 2022-05-24T13:43:11 | 2022-05-24T13:34:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4328",
"html_url": "https://github.com/huggingface/datasets/pull/4328",
"diff_url": "https://github.com/huggingface/datasets/pull/4328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4328.patch",
"merged_at": "2022-05-24T13:34:32"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4327/comments | https://api.github.com/repos/huggingface/datasets/issues/4327/events | https://github.com/huggingface/datasets/issues/4327 | 1,233,840,020 | I_kwDODunzps5JiueU | 4,327 | `wikipedia` pre-processed datasets | {
"login": "vpj",
"id": 81152,
"node_id": "MDQ6VXNlcjgxMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/81152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vpj",
"html_url": "https://github.com/vpj",
"followers_url": "https://api.github.com/users/vpj/followers",
"following_url": "https://api.github.com/users/vpj/following{/other_user}",
"gists_url": "https://api.github.com/users/vpj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vpj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vpj/subscriptions",
"organizations_url": "https://api.github.com/users/vpj/orgs",
"repos_url": "https://api.github.com/users/vpj/repos",
"events_url": "https://api.github.com/users/vpj/events{/privacy}",
"received_events_url": "https://api.github.com/users/vpj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 228.58 MiB, generated: 224.18 MiB, post-processed: Unknown size, total: 452.76 MiB) to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.66k/1.66k [00:00<00:00, 1.02MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 235M/235M [00:02<00:00, 82.8MB/s]\r\nDataset wikipedia downloaded and prepared to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 290.75it/s]\r\n\r\nreal\t0m9.693s\r\nuser\t0m6.002s\r\nsys\t0m3.260s\r\n```\r\n\r\nCould you please check your environment info, as requested when opening this issue?\r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nMaybe you are using an old version of `datasets`...",
"Downloading and processing `wikipedia simple` dataset completed in under 11sec on M1 Mac. Could you please check `dataset` version as mentioned by @albertvillanova? Also check system specs, if system is under load processing could take some time I guess."
] | 2022-05-12T11:25:42 | 2022-08-31T08:26:57 | 2022-08-31T08:26:57 | NONE | null | null | null | ## Describe the bug
The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset README says that certain subsets are pre-processed. However, they do not seem to be available: when I try to load them, it takes a really long time, and it looks like the data is being processed from scratch.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```
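For reference, a hedged sanity check, assuming the slowness comes from an outdated `datasets` install that re-processes the dump locally instead of downloading the pre-processed files:

```python
# A minimal smoke test; it assumes the pre-processed 20220301 configs require
# a recent `datasets` release (2.x) and uses the small "simple" config.
import datasets

print(datasets.__version__)
ds = datasets.load_dataset("wikipedia", "20220301.simple")
print(ds)
```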
## Expected results
To load the dataset
## Actual results
It takes a very long time to load (after downloading).
After `Downloading data files: 100%`, it runs for hours and then gets killed.
I tried `wikipedia.simple` and it finished processing after ~30 minutes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4327/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4326/comments | https://api.github.com/repos/huggingface/datasets/issues/4326/events | https://github.com/huggingface/datasets/pull/4326 | 1,233,818,489 | PR_kwDODunzps43tjWy | 4,326 | Fix type hint and documentation for `new_fingerprint` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T11:05:08 | 2022-06-01T13:04:45 | 2022-06-01T12:56:18 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4326",
"html_url": "https://github.com/huggingface/datasets/pull/4326",
"diff_url": "https://github.com/huggingface/datasets/pull/4326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4326.patch",
"merged_at": "2022-06-01T12:56:18"
} | Currently, there are neither type hints nor `Optional` annotations for the `new_fingerprint` argument in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.
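For illustration, a minimal sketch of the annotation being added (the method shown is illustrative, not the exact diff):

```python
# A hedged sketch; `flatten` stands in for the several affected methods.
from typing import Optional

class Dataset:
    def flatten(self, max_depth: int = 16, new_fingerprint: Optional[str] = None) -> "Dataset":
        """new_fingerprint: fingerprint of the dataset after the transform;
        if None, it is auto-generated by the fingerprinting decorator."""
        ...
```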
The modifications in this PR are safe, since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454
we make sure, for the non-in-place case, to auto-generate a new fingerprint (as indicated in the doc). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4326/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4325/comments | https://api.github.com/repos/huggingface/datasets/issues/4325/events | https://github.com/huggingface/datasets/issues/4325 | 1,233,812,191 | I_kwDODunzps5Jinrf | 4,325 | Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n",
"Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience 🙏 ",
"Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)",
"Thanks, these are working great now (including @domenicrosati 's, afaics!)"
] | 2022-05-12T10:59:08 | 2022-05-13T10:57:15 | 2022-05-13T10:57:02 | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight, since a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in the viewer. Maybe it needs a bit more time.
* https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train
* https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
While offenseval_2020 is gated with a prompt, my other gated previews run fine in the viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj, so I'm a bit stumped!
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4325/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4324/comments | https://api.github.com/repos/huggingface/datasets/issues/4324/events | https://github.com/huggingface/datasets/issues/4324 | 1,233,780,870 | I_kwDODunzps5JigCG | 4,324 | Support >1 PWC dataset per dataset card | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @leondz, I agree it would be nice. We'll see what we can do ;)"
] | 2022-05-12T10:29:07 | 2022-05-13T11:25:29 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Some Hub datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and a single Hub dataset, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/strombergnlp/offenseval_2020), covers all five. However, the YAML `paperswithcode_id:` dataset card entry only supports one value; when multiple values are added, the PWC link disappears from the dataset page.
Because the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, end users have no way of getting a dataset reader link to appear on all the PWC datasets supported by a single HF Hub dataset reader.
It's not that unusual for papers to introduce multiple parallel variants of a dataset, so it would be handy to reflect this, e.g. so dataset maintainers can stay DRY and dataset users can keep what they're doing simple.
**Describe the solution you'd like**
I'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets.
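For illustration, a sketch of what the parsed card metadata could look like if list values were supported (the PWC slugs below are hypothetical):

```python
# Hypothetical parsed form of the YAML front matter with list support;
# the slugs are made up for illustration.
card_metadata = {
    "paperswithcode_id": [
        "offenseval-2020-arabic",
        "offenseval-2020-danish",
    ],
}
```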
**Describe alternatives you've considered**
De-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ...
**Additional context**
Hope that's enough
**Priority**
Low | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4324/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4323/comments | https://api.github.com/repos/huggingface/datasets/issues/4323/events | https://github.com/huggingface/datasets/issues/4323 | 1,233,634,928 | I_kwDODunzps5Jh8Zw | 4,323 | Audio can not find value["bytes"] | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"![image](https://user-images.githubusercontent.com/34292279/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecause we have path and bytes already",
"> but i have some confused why path prior is higher than bytes?\r\n\r\nIf the audio file is already available locally, we don't need to store the bytes again.\r\n\r\nIf you don't specify a \"path\" to a local file, then the bytes are stored. You can set \"path\" to None for example.\r\n\r\n> if you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\n> because we have path and bytes already\r\n\r\nIt's useful to pass both \"path\" and \"bytes\" in `_generate_examples`:\r\n- when the dataset has been downloaded, then the \"path\" to the audio files are stored and we can ignore \"bytes\" in order to save disk space.\r\n- when the dataset is loaded in streaming mode, the audio files are not available on your disk and therefore we use the \"bytes\" ",
"@lhoestq \r\nFirst of all, thx for reply\r\n\r\nbut, if i put in \"bytes\" and \"path\"\r\nex) {\"bytes\":\"blah blah~\", \"path\":\"blah blah~\"}\r\n\r\nthat source working that my bytes to empty first,\r\nand then, re-calculate my bytes!\r\n![image](https://user-images.githubusercontent.com/34292279/168534687-1fb60d8c-d369-47d2-a4bb-db68f95194b4.png)\r\n\r\nif you have some pcm file, pcm is can read bytes.\r\nso, i put in bytes and paths.\r\nbut bytes is been None why encode_example func make None\r\nand then, on decode_example func, we no have bytes. so, calculate bytes to path.\r\npcm is not support librosa or soundfile, error occured!\r\n\r\nthe most important thing is not announced anywhere this situation can be reproduced\r\n\r\nis that truly right process flow?",
"I don't think we support PCM files, feel free to convert your data to WAV for now.\r\n\r\nIt would be awesome to support PCM files though, let me know if you'd like to contribute this feature, I'd be happy to help",
"@lhoestq oh, how can i contribute?",
"You can clone the repository (see the guide on [how to contribute](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-create-a-pull-request)) and see how we can make the `Image.encode_example` method work with PCM data.\r\n\r\nThere might be other ways to approach this problem, but here is what I think is a reasonable one:\r\n\r\nI think `Image.encode_example` should be able to take PCM bytes as input and the sampling rate, and return the WAV bytes (built by combining the PCM bytes and the sampling rate info), so that `Image.decode_example` can read it.\r\n\r\nTo check if the input bytes are PCM data, you can just check if the extension of the `path` is \".pcm\".\r\n",
"maybe i can start to contribute on this sunday!\r\n@lhoestq ",
"@lhoestq plz check my pr #4409 \r\n\r\nam i wrong somting?",
"Thanks, I reviewed your PR :)"
] | 2022-05-12T08:31:58 | 2022-07-07T13:16:08 | 2022-07-07T13:16:08 | CONTRIBUTOR | null | null | null | ## Describe the bug
I wrote `_generate_examples` like this:
![image](https://user-images.githubusercontent.com/34292279/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png)
but where are the bytes?
![image](https://user-images.githubusercontent.com/34292279/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png)
## Expected results
value["bytes"] is not None, so i can make datasets with bytes, not path
## What the bytes look like:
blah blah~~
\xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03
blah blah~~
so that function does not return None
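As a workaround for the PCM case discussed in the comments above, here is a minimal sketch (not the library's built-in behavior) that wraps raw PCM bytes in a WAV container so `Audio` can decode them; the channel count, sample width, and sampling rate are assumptions:

```python
# A hedged sketch: wrap raw PCM bytes in a WAV header so Audio.decode_example
# can read them; mono 16-bit samples at 16 kHz are assumed here.
import io
import wave

def pcm_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int = 16_000) -> bytes:
    buffer = io.BytesIO()
    with wave.open(buffer, "wb") as wav_file:
        wav_file.setnchannels(1)   # assumed: mono
        wav_file.setsampwidth(2)   # assumed: 16-bit samples
        wav_file.setframerate(sampling_rate)
        wav_file.writeframes(pcm_bytes)
    return buffer.getvalue()

# In _generate_examples, keep "path" as None so the bytes are stored:
# yield key, {"audio": {"bytes": pcm_to_wav_bytes(raw), "path": None}}
```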
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4323/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4322/comments | https://api.github.com/repos/huggingface/datasets/issues/4322/events | https://github.com/huggingface/datasets/pull/4322 | 1,233,596,947 | PR_kwDODunzps43s1wy | 4,322 | Added stratify option to train_test_split function. | {
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Nice thank you ! This will be super useful :)\r\n> \r\n> Could you also add some tests in test_arrow_dataset.py and add an example of usage in the `Example:` section of the `train_test_split` docstring ?\r\n\r\nI will try to do it, is there any documentation for adding test cases? I have never done it before.",
"Thanks for the changes !\r\n\r\n> I will try to do it, is there any documentation for adding test cases? I have never done it before.\r\n\r\nYou can just add a function `test_train_test_split_startify` in `test_arrow_dataset.py`.\r\n\r\nIn this function you can define a dataset and make sure that `train_test_split` with the `stratify` argument works as expected.\r\n\r\nYou can do `pytest tests/test_arrow_dataset.py::test_train_test_split_startify` to run your test.\r\n\r\nFeel free to get some inspiration from other tests like `test_interleave_datasets` for example",
"I have added tests for stratified train_test_split in `test_arrow_dataset.py` file inside `test_train_test_split_startify` function. I have also added example usage with `stratify` arg in `Example:` section of the `train_test_split` docstring.\r\nResults of tests:\r\n```\r\n(data) nandwalritik@hp:~/datasets$ pytest tests/test_arrow_dataset.py::test_train_test_split_startify -W ignore\r\n============================================================================ test session starts ============================================================================\r\nplatform linux -- Python 3.9.5, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/nandwalritik/datasets\r\nplugins: datadir-1.3.1, forked-1.4.0, xdist-2.5.0\r\ncollected 1 item \r\n\r\ntests/test_arrow_dataset.py . [100%]\r\n\r\n============================================================================= 1 passed in 0.12s =============================================================================\r\n\r\n```",
"Thanks a lot !\r\n\r\n`utils/stratify.py` sounds good yes :)\r\n\r\nAlso feel free to merge `master` into your branch to fix the CI ;)",
"Added all the changes as were suggested and rebased with `main`.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, I encounter an error when I try to specify the stratify_by_column. However, I have a columns which specific the label of the row as a string. But an error showed when I try to do it. \"ValueError: Stratifying by column is only supported for ClassLabel column, and column code is Value.\".",
"Hi @Damon03 , you can change the type of your column to ClassLabel using\r\n```python\r\nds = ds.class_encode_column(column_name)\r\n```\r\nthen you'll be free to use `stratify` :)",
"Thank you so much. It worked."
] | 2022-05-12T08:00:31 | 2022-11-22T14:53:55 | 2022-05-25T20:43:51 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4322",
"html_url": "https://github.com/huggingface/datasets/pull/4322",
"diff_url": "https://github.com/huggingface/datasets/pull/4322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4322.patch",
"merged_at": "2022-05-25T20:43:51"
} | This PR adds a `stratify` option to the `train_test_split` method. I used scikit-learn's `StratifiedShuffleSplit` class as a reference for implementing the stratified split and integrated the changes suggested by @lhoestq.
It fixes #3452.
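A hedged usage sketch (per the discussion above, the merged argument is `stratify_by_column` and the column must be a `ClassLabel`; the data here is made up for illustration):

```python
# Example of stratified splitting; class_encode_column converts a string
# Value column to ClassLabel so it can be used for stratification.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"] * 25, "label": ["pos", "neg"] * 50})
ds = ds.class_encode_column("label")
splits = ds.train_test_split(test_size=0.2, stratify_by_column="label", seed=42)
print(len(splits["train"]), len(splits["test"]))
```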
@lhoestq Please review and let me know if any changes are required.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4322/reactions",
"total_count": 5,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4322/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4321/comments | https://api.github.com/repos/huggingface/datasets/issues/4321/events | https://github.com/huggingface/datasets/pull/4321 | 1,233,273,351 | PR_kwDODunzps43ryW7 | 4,321 | Adding dataset enwik8 | {
"login": "HallerPatrick",
"id": 22773355,
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HallerPatrick",
"html_url": "https://github.com/HallerPatrick",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Thank you for the great feedback! Looks like all tests are passing now :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T23:25:02 | 2022-06-01T14:27:30 | 2022-06-01T14:04:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4321",
"html_url": "https://github.com/huggingface/datasets/pull/4321",
"diff_url": "https://github.com/huggingface/datasets/pull/4321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4321.patch",
"merged_at": "2022-06-01T14:04:06"
} | Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4321/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4321/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4320/comments | https://api.github.com/repos/huggingface/datasets/issues/4320/events | https://github.com/huggingface/datasets/issues/4320 | 1,233,208,864 | I_kwDODunzps5JgUYg | 4,320 | Multi-news dataset loader attempts to strip wrong character from beginning of summaries | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character",
"Cool! I made a PR."
] | 2022-05-11T21:36:41 | 2022-05-16T13:52:10 | 2022-05-16T13:52:10 | CONTRIBUTOR | null | null | null | ## Describe the bug
The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`.
I would have just opened a PR to fix the mistake, but I am wondering: what is the motivation for stripping this character? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see it in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)).
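A quick illustration of why the current strip is a no-op (the leading character in the data is an en dash, U+2013, not the ASCII hyphen):

```python
# strip("- ") removes only '-' and ' ' characters, so the en dash survives.
summary = "– Example summary."
print(summary.strip("- "))   # '– Example summary.' (unchanged)
print(summary.strip("– "))   # 'Example summary.'
```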
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4320/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4319/comments | https://api.github.com/repos/huggingface/datasets/issues/4319/events | https://github.com/huggingface/datasets/pull/4319 | 1,232,982,023 | PR_kwDODunzps43q0UY | 4,319 | Adding eval metadata for ade v2 | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T17:36:20 | 2022-05-12T13:29:51 | 2022-05-12T13:22:19 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4319",
"html_url": "https://github.com/huggingface/datasets/pull/4319",
"diff_url": "https://github.com/huggingface/datasets/pull/4319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4319.patch",
"merged_at": "2022-05-12T13:22:19"
} | Adding metadata to allow evaluation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4319/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4318/comments | https://api.github.com/repos/huggingface/datasets/issues/4318/events | https://github.com/huggingface/datasets/pull/4318 | 1,232,905,488 | PR_kwDODunzps43qkkQ | 4,318 | Don't check f.loc in _get_extraction_protocol_with_magic_number | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T16:27:09 | 2022-05-11T16:57:02 | 2022-05-11T16:46:31 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4318",
"html_url": "https://github.com/huggingface/datasets/pull/4318",
"diff_url": "https://github.com/huggingface/datasets/pull/4318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4318.patch",
"merged_at": "2022-05-11T16:46:31"
} | `f.loc` doesn't always exist for file-like objects in Python. I removed it, since it was not necessary anyway (we always seek the file back to 0 after reading the magic number).
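For reference, a minimal sketch of the resulting pattern (an assumed helper, not the exact library code):

```python
# A hedged sketch: read the magic number, then always rewind, so there is no
# need to consult a (possibly missing) f.loc attribute first.
def read_magic_number(f, num_bytes: int = 8) -> bytes:
    magic_number = f.read(num_bytes)
    f.seek(0)
    return magic_number
```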
Fix https://github.com/huggingface/datasets/issues/4310 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4318/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4317/comments | https://api.github.com/repos/huggingface/datasets/issues/4317/events | https://github.com/huggingface/datasets/pull/4317 | 1,232,737,401 | PR_kwDODunzps43qBzh | 4,317 | Fix cnn_dailymail (dm stories were ignored) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T14:25:25 | 2022-05-11T16:00:09 | 2022-05-11T15:52:37 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4317",
"html_url": "https://github.com/huggingface/datasets/pull/4317",
"diff_url": "https://github.com/huggingface/datasets/pull/4317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4317.patch",
"merged_at": "2022-05-11T15:52:37"
} | https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the Google Drive link (it has annoying quota-limit issues).
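A hedged sanity check after the fix (the expected count is approximate):

```python
# With DailyMail stories included again, the train split should hold roughly
# 287k examples rather than the CNN subset alone (approximate figure).
from datasets import load_dataset

ds = load_dataset("cnn_dailymail", "3.0.0", split="train")
print(len(ds))
```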
We can do a patch release after this is merged. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4317/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4316/comments | https://api.github.com/repos/huggingface/datasets/issues/4316/events | https://github.com/huggingface/datasets/pull/4316 | 1,232,681,207 | PR_kwDODunzps43p1Za | 4,316 | Support passing config_kwargs to CLI run_beam | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T13:53:37 | 2022-05-11T14:36:49 | 2022-05-11T14:28:31 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4316",
"html_url": "https://github.com/huggingface/datasets/pull/4316",
"diff_url": "https://github.com/huggingface/datasets/pull/4316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4316.patch",
"merged_at": "2022-05-11T14:28:31"
} | This PR supports passing `config_kwargs` to the CLI `run_beam` command so that, for example, for the "wikipedia" dataset we can pass:
```
--date 20220501 --language ca
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4316/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4315/comments | https://api.github.com/repos/huggingface/datasets/issues/4315/events | https://github.com/huggingface/datasets/pull/4315 | 1,232,549,330 | PR_kwDODunzps43pZ6p | 4,315 | Fix CLI run_beam namespace | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T12:21:00 | 2022-05-11T13:13:00 | 2022-05-11T13:05:08 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4315",
"html_url": "https://github.com/huggingface/datasets/pull/4315",
"diff_url": "https://github.com/huggingface/datasets/pull/4315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4315.patch",
"merged_at": "2022-05-11T13:05:08"
} | Currently, it raises a `TypeError`:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4315/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4314/comments | https://api.github.com/repos/huggingface/datasets/issues/4314/events | https://github.com/huggingface/datasets/pull/4314 | 1,232,326,726 | PR_kwDODunzps43oqXD | 4,314 | Catch pull error when mirroring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T09:38:35 | 2022-05-11T12:54:07 | 2022-05-11T12:46:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4314",
"html_url": "https://github.com/huggingface/datasets/pull/4314",
"diff_url": "https://github.com/huggingface/datasets/pull/4314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4314.patch",
"merged_at": "2022-05-11T12:46:42"
} | Catch pull errors when mirroring so that the script continues to update the other datasets.
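A minimal sketch of the pattern (hypothetical names; not the actual mirroring script):

```python
# A hedged sketch: collect pull errors, keep mirroring the remaining datasets,
# and surface all failures at the end.
def mirror(name: str) -> None:
    """Hypothetical stand-in for pulling a dataset repo and pushing the mirror."""
    ...

datasets_to_mirror = ["squad", "glue"]  # illustrative names
errors = []
for dataset_name in datasets_to_mirror:
    try:
        mirror(dataset_name)
    except Exception as error:
        errors.append((dataset_name, error))

if errors:
    for dataset_name, error in errors:
        print(f"Failed to mirror {dataset_name}: {error}")
    raise RuntimeError("Please update the failed datasets manually.")
```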
The error will still be printed at the end of the job; in that case the job also fails and asks you to manually update the datasets that failed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4314/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4313/comments | https://api.github.com/repos/huggingface/datasets/issues/4313/events | https://github.com/huggingface/datasets/pull/4313 | 1,231,764,100 | PR_kwDODunzps43m4qB | 4,313 | Add API code examples for Builder classes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T22:22:32 | 2022-05-12T17:02:43 | 2022-05-12T12:36:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4313",
"html_url": "https://github.com/huggingface/datasets/pull/4313",
"diff_url": "https://github.com/huggingface/datasets/pull/4313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4313.patch",
"merged_at": "2022-05-12T12:36:57"
} | This PR adds API code examples for the Builder classes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4313/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4312/comments | https://api.github.com/repos/huggingface/datasets/issues/4312/events | https://github.com/huggingface/datasets/pull/4312 | 1,231,662,775 | PR_kwDODunzps43mlug | 4,312 | added TR-News dataset | {
"login": "batubayk",
"id": 25901065,
"node_id": "MDQ6VXNlcjI1OTAxMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/25901065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/batubayk",
"html_url": "https://github.com/batubayk",
"followers_url": "https://api.github.com/users/batubayk/followers",
"following_url": "https://api.github.com/users/batubayk/following{/other_user}",
"gists_url": "https://api.github.com/users/batubayk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/batubayk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/batubayk/subscriptions",
"organizations_url": "https://api.github.com/users/batubayk/orgs",
"repos_url": "https://api.github.com/users/batubayk/repos",
"events_url": "https://api.github.com/users/batubayk/events{/privacy}",
"received_events_url": "https://api.github.com/users/batubayk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @batubayk.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nI would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2022-05-10T20:33:00 | 2022-10-03T09:36:45 | 2022-10-03T09:36:45 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4312",
"html_url": "https://github.com/huggingface/datasets/pull/4312",
"diff_url": "https://github.com/huggingface/datasets/pull/4312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4312.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4312/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4311/comments | https://api.github.com/repos/huggingface/datasets/issues/4311/events | https://github.com/huggingface/datasets/pull/4311 | 1,231,369,438 | PR_kwDODunzps43ln8- | 4,311 | [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it"
] | 2022-05-10T15:52:15 | 2022-05-10T17:19:42 | 2022-05-10T17:11:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4311",
"html_url": "https://github.com/huggingface/datasets/pull/4311",
"diff_url": "https://github.com/huggingface/datasets/pull/4311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4311.patch",
"merged_at": "2022-05-10T17:11:47"
} | I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when metadata are provided - the labels can simply be included in the metadata if needed
- raise informative error messages when metadata and images aren't linked correctly:
- when an image is missing a metadata file
- when a metadata file is missing an image
I added some tests for these changes as well.
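As a concrete illustration of the metadata linking described above, here is a minimal image-captioning sketch (paths and captions are made up; `imagefolder` with a `metadata.jsonl` whose `file_name` column points at the images is the documented convention):

```python
from datasets import load_dataset

# Assumed layout (illustrative):
#   data/train/0001.png
#   data/train/0002.png
#   data/train/metadata.jsonl, one JSON object per line, e.g.
#     {"file_name": "0001.png", "text": "a red square"}
#     {"file_name": "0002.png", "text": "a blue circle"}
ds = load_dataset("imagefolder", data_dir="data")
print(ds["train"][0]["text"])  # -> "a red square"
```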
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4311/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4310/comments | https://api.github.com/repos/huggingface/datasets/issues/4310/events | https://github.com/huggingface/datasets/issues/4310 | 1,231,319,815 | I_kwDODunzps5JZHMH | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | {
"login": "milmin",
"id": 72745467,
"node_id": "MDQ6VXNlcjcyNzQ1NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milmin",
"html_url": "https://github.com/milmin",
"followers_url": "https://api.github.com/users/milmin/followers",
"following_url": "https://api.github.com/users/milmin/following{/other_user}",
"gists_url": "https://api.github.com/users/milmin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milmin/subscriptions",
"organizations_url": "https://api.github.com/users/milmin/orgs",
"repos_url": "https://api.github.com/users/milmin/repos",
"events_url": "https://api.github.com/users/milmin/events{/privacy}",
"received_events_url": "https://api.github.com/users/milmin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-05-10T15:12:53 | 2022-05-11T16:46:31 | 2022-05-11T16:46:31 | NONE | null | null | null | ## Describe the bug
Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Note that loading with `streaming=False` works fine.
In the following steps we load Parquet files, but the same happens with pickle files. The problem seems to come from the `fsspec` lib; I also included the `s3fs` and `fsspec` versions in the environment info since I'm loading from an S3 bucket.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# path is the path to parquet files
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
dataset = load_dataset("parquet", data_files=data_files, streaming=True)
```
## Expected results
A dataset object `datasets.dataset_dict.DatasetDict`
## Actual results
```
AttributeError Traceback (most recent call last)
<command-562086> in <module>
11
12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1679 if streaming:
1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)
-> 1681 return builder_instance.as_streaming_dataset(
1682 split=split,
1683 use_auth_token=use_auth_token,
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
904 )
905 self._check_manual_download(dl_manager)
--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
907 # By default, return all splits
908 if split is None:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager)
30 if not self.config.data_files:
31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)
33 if isinstance(data_files, (str, list, tuple)):
34 files = data_files
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
798
799 def download_and_extract(self, url_or_urls):
--> 800 return self.extract(self.download(url_or_urls))
801
802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
776
777 def extract(self, path_or_paths):
--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
779 return urlpaths
780
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
312 num_proc = 1
313 if num_proc <= 1 or len(iterable) <= num_proc:
--> 314 mapped = [
315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
313 if num_proc <= 1 or len(iterable) <= num_proc:
314 mapped = [
--> 315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
317 ]
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
249 # Singleton first to spare some computation
250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 251 return function(data_struct)
252
253 # Reduce logging to keep things readable in multiprocessing with tqdm
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
781 def _extract(self, urlpath: str) -> str:
782 urlpath = str(urlpath)
--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
784 if protocol is None:
785 # no extraction
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)
371 urlpath, kwargs = urlpath, {}
372 with fsspec.open(urlpath, **kwargs) as f:
--> 373 return _get_extraction_protocol_with_magic_number(f)
374
375
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)
335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:
336 """read the magic number from a file-like object and return the compression protocol"""
--> 337 prev_loc = f.loc
338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
339 f.seek(prev_loc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item)
337
338 def __getattr__(self, item):
--> 339 return getattr(self.f, item)
340
341 def __enter__(self):
AttributeError: '_io.BufferedReader' object has no attribute 'loc'
```
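Reading the traceback, the local file handle is a plain `io.BufferedReader`, which exposes `tell()`/`seek()` rather than the `loc` attribute that fsspec's remote file objects provide. A defensive version of the helper might look like this (a sketch of one possible fix, not the actual patch that shipped):

```python
MAGIC_NUMBER_MAX_LENGTH = 8  # assumed value, for illustration only

def _get_extraction_protocol_with_magic_number(f):
    """Read the magic number from a file-like object without assuming `.loc`."""
    prev_loc = f.loc if hasattr(f, "loc") else f.tell()
    magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
    f.seek(prev_loc)
    # ... match magic_number against known compression signatures here
    return None
```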
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
- `fsspec` version: 2021.08.1
- `s3fs` version: 2021.08.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4310/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4309/comments | https://api.github.com/repos/huggingface/datasets/issues/4309/events | https://github.com/huggingface/datasets/pull/4309 | 1,231,232,935 | PR_kwDODunzps43lKpm | 4,309 | [WIP] Add TEDLIUM dataset | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchitgandhi/cache/tedlium/release1/1.0.1/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/load.py\", line 1703, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache', beam_runner='DirectRunner')\r\n```",
"Extra Python imports/Linux packages:\r\n```\r\npip install pydub\r\nsudo apt install ffmpeg\r\n```",
"Script heavily inspired by the TF datasets script at: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/tedlium.py\r\n\r\nThe TF datasets script uses the module AudioSegment from the package `pydub` (https://github.com/jiaaro/pydub), which is used to to open the audio files (stored in .sph format):\r\nhttps://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L167-L170\r\nThis package requires the pip install of `pydub` and the system installation of `ffmpeg`: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nThe TF datasets script also uses `_build_pcollection`:\r\nhttps://github.com/huggingface/datasets/blob/8afbbb6fe66b40d05574e2e72e65e974c72ae769/datasets/tedlium/tedlium.py#L200-L206\r\nHowever, I was advised against using `beam` logic. Thus, I have reverted to generating the examples file-by-file: https://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L112-L138\r\n\r\nI am now able to generate examples by running the `load_dataset` command:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\nHere, generating examples is **extremely** slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?",
"> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nIt's ok, windows users will have have a bad time but I'm not sure we can do much about it.\r\n\r\n> Here, generating examples is extremely slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?\r\n\r\nNot at the moment. For such cases we advise hosting the dataset ourselves in a processed format. The license doesn't allow this since the license is \"NoDerivatives\". Currently the only way to parallelize it is by keeping is as a beam dataset and let users pay Google Dataflow to process it (or use spark or whatever).",
"Thanks for your super speedy reply @lhoestq!\r\n\r\nI’ve uploaded the script and README.md to the org here: https://huggingface.co/datasets/LIUM/tedlium\r\nIs any modification of the script required to be able to use it from the Hub? When I run:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntedlium = load_dataset(\"LIUM/tedlium\", \"release1\") # for Release 1\r\n```\r\nI get the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 load_dataset(\"LIUM/tedlium\", \"release1\")\r\n\r\nFile ~/datasets/src/datasets/load.py:1676, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 ignore_verifications = ignore_verifications or save_infos\r\n 1675 # Create a dataset builder\r\n-> 1676 builder_instance = load_dataset_builder(\r\n 1677 path=path,\r\n 1678 name=name,\r\n 1679 data_dir=data_dir,\r\n 1680 data_files=data_files,\r\n 1681 cache_dir=cache_dir,\r\n 1682 features=features,\r\n 1683 download_config=download_config,\r\n 1684 download_mode=download_mode,\r\n 1685 revision=revision,\r\n 1686 use_auth_token=use_auth_token,\r\n 1687 **config_kwargs,\r\n 1688 )\r\n 1690 # Return iterable dataset in case of streaming\r\n 1691 if streaming:\r\n\r\nFile ~/datasets/src/datasets/load.py:1502, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1500 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1501 download_config.use_auth_token = use_auth_token\r\n-> 1502 dataset_module = dataset_module_factory(\r\n 1503 path,\r\n 1504 revision=revision,\r\n 1505 download_config=download_config,\r\n 1506 download_mode=download_mode,\r\n 1507 data_dir=data_dir,\r\n 1508 data_files=data_files,\r\n 1509 )\r\n 1511 # Get dataset builder class from the processing script\r\n 1512 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:1254, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1249 if isinstance(e1, FileNotFoundError):\r\n 1250 raise FileNotFoundError(\r\n 1251 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. 
\"\r\n 1252 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1253 ) from None\r\n-> 1254 raise e1 from None\r\n 1255 else:\r\n 1256 raise FileNotFoundError(\r\n 1257 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.\"\r\n 1258 )\r\n\r\nFile ~/datasets/src/datasets/load.py:1227, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1225 raise e\r\n 1226 if filename in [sibling.rfilename for sibling in dataset_info.siblings]:\r\n-> 1227 return HubDatasetModuleFactoryWithScript(\r\n 1228 path,\r\n 1229 revision=revision,\r\n 1230 download_config=download_config,\r\n 1231 download_mode=download_mode,\r\n 1232 dynamic_modules_path=dynamic_modules_path,\r\n 1233 ).get_module()\r\n 1234 else:\r\n 1235 return HubDatasetModuleFactoryWithoutScript(\r\n 1236 path,\r\n 1237 revision=revision,\r\n (...)\r\n 1241 download_mode=download_mode,\r\n 1242 ).get_module()\r\n\r\nFile ~/datasets/src/datasets/load.py:940, in HubDatasetModuleFactoryWithScript.get_module(self)\r\n 938 def get_module(self) -> DatasetModule:\r\n 939 # get script and other files\r\n--> 940 local_path = self.download_loading_script()\r\n 941 dataset_infos_path = self.download_dataset_infos_file()\r\n 942 imports = get_imports(local_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:918, in HubDatasetModuleFactoryWithScript.download_loading_script(self)\r\n 917 def download_loading_script(self) -> str:\r\n--> 918 file_path = hf_hub_url(path=self.name, name=self.name.split(\"/\")[1] + \".py\", revision=self.revision)\r\n 919 download_config = self.download_config.copy()\r\n 920 if download_config.download_desc is None:\r\n\r\nTypeError: hf_hub_url() got an unexpected keyword argument 'name'\r\n```\r\n\r\nNote that I am able to load the dataset from the `datasets` repo with the following lines of code:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```",
"What version of `datasets` do you have ?\r\nUpdating `datasets` should fix the error ;)\r\n",
"> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\n`soundfile`, which is a required audio dependency, should also work with `.sph` files, no?",
"> `soundfile`, which is a required audio dependency, should also work with `.sph` files, no?\r\n\r\nAwesome, thanks for the pointer @mariosasko! Switched `pydub` to `soundfile`, and having specifying the `dtype` argument in `soundfile.read` as `np.int16`, the arrays match with those from `pydub` ✅\r\n\r\nI also did some heavy optimising of the script with the processing of the `.stm` and `.sph` files - it now runs 2000x faster than before, so there probably isn't a need to upload the data to the Hub @lhoestq. The total processing time is just ~2mins now 🚀\r\n",
"TEDLIUM completed and uploaded to the HF Hub: https://huggingface.co/datasets/LIUM/tedlium",
"Awesome !"
] | 2022-05-10T14:12:47 | 2022-06-17T12:54:40 | 2022-06-17T11:44:01 | CONTRIBUTOR | null | false | {
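Following up on the pydub-to-soundfile switch discussed in the comments above: libsndfile (which backs `soundfile`) can read NIST SPHERE files directly, and `dtype="int16"` matches the integer samples pydub returns. A minimal sketch (the path is made up):

```python
import soundfile as sf

# Read a TED-LIUM SPHERE file as 16-bit integer samples (illustrative path)
audio, sampling_rate = sf.read("TEDLIUM_release1/train/sph/talk.sph", dtype="int16")
print(audio.shape, sampling_rate)
```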
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4309",
"html_url": "https://github.com/huggingface/datasets/pull/4309",
"diff_url": "https://github.com/huggingface/datasets/pull/4309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4309.patch",
"merged_at": null
} | Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedlium.py` from TF datasets using `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for continuous testing~~
- [ ] ~~Dummy data tests~~
- [ ] ~~Real data tests~~
- [ ] Create the metadata JSON
- [ ] Close PR and add directly to the Hub under LIUM org | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4309/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4308/comments | https://api.github.com/repos/huggingface/datasets/issues/4308/events | https://github.com/huggingface/datasets/pull/4308 | 1,231,217,783 | PR_kwDODunzps43lHdP | 4,308 | Remove unused multiprocessing args from test CLI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T14:02:15 | 2022-05-11T12:58:25 | 2022-05-11T12:50:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4308",
"html_url": "https://github.com/huggingface/datasets/pull/4308",
"diff_url": "https://github.com/huggingface/datasets/pull/4308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4308.patch",
"merged_at": "2022-05-11T12:50:42"
} | Multiprocessing is not used in the test CLI. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4308/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4307/comments | https://api.github.com/repos/huggingface/datasets/issues/4307/events | https://github.com/huggingface/datasets/pull/4307 | 1,231,175,639 | PR_kwDODunzps43k-Wo | 4,307 | Add packaged builder configs to the documentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T13:34:19 | 2022-05-10T14:03:50 | 2022-05-10T13:55:54 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4307",
"html_url": "https://github.com/huggingface/datasets/pull/4307",
"diff_url": "https://github.com/huggingface/datasets/pull/4307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4307.patch",
"merged_at": "2022-05-10T13:55:54"
} | Adding the packaged builders' configurations to the docs reference is useful to show the list of all the parameters one can use when loading data in many formats: CSV, JSON, etc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4307/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4306/comments | https://api.github.com/repos/huggingface/datasets/issues/4306/events | https://github.com/huggingface/datasets/issues/4306 | 1,231,137,204 | I_kwDODunzps5JYam0 | 4,306 | `load_dataset` does not work with certain filename. | {
"login": "whatever60",
"id": 57242693,
"node_id": "MDQ6VXNlcjU3MjQyNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whatever60",
"html_url": "https://github.com/whatever60",
"followers_url": "https://api.github.com/users/whatever60/followers",
"following_url": "https://api.github.com/users/whatever60/following{/other_user}",
"gists_url": "https://api.github.com/users/whatever60/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whatever60/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whatever60/subscriptions",
"organizations_url": "https://api.github.com/users/whatever60/orgs",
"repos_url": "https://api.github.com/users/whatever60/repos",
"events_url": "https://api.github.com/users/whatever60/events{/privacy}",
"received_events_url": "https://api.github.com/users/whatever60/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Never mind. It is because of the caching of datasets..."
] | 2022-05-10T13:14:04 | 2022-05-10T18:58:36 | 2022-05-10T18:58:09 | NONE | null | null | null | ## Describe the bug
This is a weird bug that took me some time to track down.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
## Expected results
No error.
## Actual results
The val file is loaded as expected, but the train file throws a JSON decoding error:
```
╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ <ipython-input-74-97947e92c100>:5 in <module> │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │
│ load_dataset │
│ │
│ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │
│ 1685 │ │
│ 1686 │ # Download and prepare data │
│ ❱ 1687 │ builder_instance.download_and_prepare( │
│ 1688 │ │ download_config=download_config, │
│ 1689 │ │ download_mode=download_mode, │
│ 1690 │ │ ignore_verifications=ignore_verifications, │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │
│ download_and_prepare │
│ │
│ 602 │ │ │ │ │ │ except ConnectionError: │
│ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │
│ 604 │ │ │ │ │ if not downloaded_from_gcs: │
│ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │
│ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │
│ 607 │ │ │ │ │ │ ) │
│ 608 │ │ │ │ │ # Sync info │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │
│ _download_and_prepare │
│ │
│ 691 │ │ │ │
│ 692 │ │ │ try: │
│ 693 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 695 │ │ │ except OSError as e: │
│ 696 │ │ │ │ raise OSError( │
│ 697 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │
│ _prepare_split │
│ │
│ 1148 │ │ │
│ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │
│ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │
│ ❱ 1151 │ │ │ for key, table in logging.tqdm( │
│ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │
│ 1153 │ │ │ ): │
│ 1154 │ │ │ │ writer.write_table(table) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │
│ __iter__ │
│ │
│ 254 │ │
│ 255 │ def __iter__(self): │
│ 256 │ │ try: │
│ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │
│ 258 │ │ │ │ # return super(tqdm...) will not catch exception │
│ 259 │ │ │ │ yield obj │
│ 260 │ │ # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │
│ __iter__ │
│ │
│ 1180 │ │ # If the bar is disabled, then just walk the iterable │
│ 1181 │ │ # (note: keep this check outside the loop for performance) │
│ 1182 │ │ if self.disable: │
│ ❱ 1183 │ │ │ for obj in iterable: │
│ 1184 │ │ │ │ yield obj │
│ 1185 │ │ │ return │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │
│ son/json.py:90 in _generate_tables │
│ │
│ 87 │ │ │ # If the file is one json object and if we need to look at the list of │
│ 88 │ │ │ if self.config.field is not None: │
│ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │
│ ❱ 90 │ │ │ │ │ dataset = json.load(f) │
│ 91 │ │ │ │ │
│ 92 │ │ │ │ # We keep only the field we are interested in │
│ 93 │ │ │ │ dataset = dataset[self.config.field] │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │
│ │
│ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │
│ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │
│ 292 │ """ │
│ ❱ 293 │ return loads(fp.read(), │
│ 294 │ │ cls=cls, object_hook=object_hook, │
│ 295 │ │ parse_float=parse_float, parse_int=parse_int, │
│ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │
│ │
│ 354 │ if (cls is None and object_hook is None and │
│ 355 │ │ │ parse_int is None and parse_float is None and │
│ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │
│ ❱ 357 │ │ return _default_decoder.decode(s) │
│ 358 │ if cls is None: │
│ 359 │ │ cls = JSONDecoder │
│ 360 │ if object_hook is not None: │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │
│ │
│ 334 │ │ containing a JSON document). │
│ 335 │ │ │
│ 336 │ │ """ │
│ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │
│ 338 │ │ end = _w(s, end).end() │
│ 339 │ │ if end != len(s): │
│ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │
│ │
│ 350 │ │ │
│ 351 │ │ """ │
│ 352 │ │ try: │
│ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │
│ 354 │ │ except StopIteration as err: │
│ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │
│ 356 │ │ return obj, end │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051)
```
However, when I rename `train.json.zip` to other names (like `training.json.zip`, or even `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.
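(The author later closed this as a caching artifact. When a previously cached file with the same name shadows new data, forcing a fresh download rules the cache out — `download_mode="force_redownload"` is a standard `datasets` option; the file names below mirror the issue's setup:)

```python
from datasets import load_dataset

data_files = {"train": "train.json.zip", "val": "val.json.zip"}
dataset = load_dataset(
    "json",
    data_files=data_files,
    field="data",
    download_mode="force_redownload",  # bypass any stale cached copy
)
```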
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4306/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4305/comments | https://api.github.com/repos/huggingface/datasets/issues/4305/events | https://github.com/huggingface/datasets/pull/4305 | 1,231,099,934 | PR_kwDODunzps43kt4P | 4,305 | Fixes FrugalScore | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4190228726,
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate",
"name": "transfer-to-evaluate",
"color": "E3165C",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint.",
"> predictions and references are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper.\r\n\r\nWhat is the order of magnitude of the difference ? Do you know what causes this ?\r\n\r\n> I switched to dynamic padding that was was used in the training, forcing the padding to max_length introduces errors for some reason that I ignore.\r\n\r\nWhat error ?"
] | 2022-05-10T12:44:06 | 2022-09-22T16:42:06 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4305",
"html_url": "https://github.com/huggingface/datasets/pull/4305",
"diff_url": "https://github.com/huggingface/datasets/pull/4305.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4305.patch",
"merged_at": null
} | There are two minor modifications in this PR:
1) `predictions` and `references` are swapped. Basically FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results reported in the paper.
2) I switched to the dynamic padding that was used in training; forcing the padding to `max_length` introduces errors for some reason that I haven't identified.
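For context, "dynamic padding" pads each batch to its longest member instead of a fixed `max_length`. A minimal sketch of the difference with the `transformers` tokenizer API (the checkpoint name is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
batch = ["a short sentence", "a slightly longer sentence than the first one"]

# Dynamic padding: pad only up to the longest sequence in this batch
dynamic = tokenizer(batch, padding=True, return_tensors="pt")

# Fixed padding: pad every sequence all the way to max_length
fixed = tokenizer(batch, padding="max_length", max_length=128, return_tensors="pt")
print(dynamic["input_ids"].shape, fixed["input_ids"].shape)
```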
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4305/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4304/comments | https://api.github.com/repos/huggingface/datasets/issues/4304/events | https://github.com/huggingface/datasets/issues/4304 | 1,231,047,051 | I_kwDODunzps5JYEmL | 4,304 | Language code search does direct matches | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now."
] | 2022-05-10T11:59:16 | 2022-05-10T12:38:42 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
Hi. Searching for BCP47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourage adding these extended codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_"), but this would lead to those datasets being hidden in dataset search.
## Steps to reproduce the bug
1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL))
2. Look for datasets using the full code
3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq))
Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`.
One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :)
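A minimal sketch of the prefix-based matching suggested above (pure illustration, not the Hub's actual search code):

```python
def matches_language(dataset_tags, query):
    """True if any BCP47 tag in dataset_tags has `query` as its base language."""
    return any(tag.split("-")[0] == query for tag in dataset_tags)

assert matches_language(["sq-AL"], "sq")
assert matches_language(["fr-CA", "en"], "fr")
assert not matches_language(["da-bornholm"], "de")
```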
## Expected results
Datasets using longer BCP47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`).
## Actual results
The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches.
## Environment info
(web app) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4304/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4303/comments | https://api.github.com/repos/huggingface/datasets/issues/4303/events | https://github.com/huggingface/datasets/pull/4303 | 1,230,867,728 | PR_kwDODunzps43j8cH | 4,303 | Fix: Add missing comma | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failure is unrelated to this PR and fixed on master, merging :)"
] | 2022-05-10T09:21:38 | 2022-05-11T08:50:15 | 2022-05-11T08:50:14 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4303",
"html_url": "https://github.com/huggingface/datasets/pull/4303",
"diff_url": "https://github.com/huggingface/datasets/pull/4303.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4303.patch",
"merged_at": "2022-05-11T08:50:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4303/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4302/comments | https://api.github.com/repos/huggingface/datasets/issues/4302/events | https://github.com/huggingface/datasets/pull/4302 | 1,230,651,117 | PR_kwDODunzps43jPE5 | 4,302 | Remove hacking license tags when mirroring datasets on the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.",
"Ok, let me rename the bad config names :) I think I can also keep backward compatibility with a warning",
"Almost done with it btw, will submit a PR that shows all the configuration name changes (from a bit more than 20 datasets)",
"Please, let me know when the renaming of configs is done. If not enough bandwidth, I can take care of it...",
"Will focus on this this afternoon ;)",
"I realized when renaming all the configurations with dots in https://github.com/huggingface/datasets/pull/4365 that it's not ideal for certain cases. For example:\r\n- many configurations have a version like \"1.0.0\" in their names\r\n- to avoid breaking changes we need to replace dots with underscores in the user input and show a warning, which hurts the experience\r\n- our second most downloaded dataset at the moment is affected: `newsgroup`\r\n- if we disallow dots, then we'll never be able to make the [allenai/c4](https://huggingface.co/datasets/allenai/c4) work with its different configurations since they contain dots, and we can't rename them because they are the official download links\r\n\r\nI was thinking of other alternatives:\r\n1. just stop separating tags per config name completely, and have a single flat YAML for all configurations. Dataset search doesn't use this info anyway\r\n2. use another YAML structure to avoid having config names as keys, such as\r\n```yaml\r\nlanguages:\r\n- config: 20220301_en\r\n values:\r\n - en\r\n```\r\n\r\nI'm down for 1, to keep things simple",
"@lhoestq I agree:\r\n- better not changing config names (so that we do not introduce any braking change)\r\n- therefore, we should not use them as keys\r\n\r\nIn relation with the proposed solutions, I have no strong opinion:\r\n- option 1 is simpler and aligns better with current usage on the Hub (configs are ignored)\r\n- however:\r\n - we will lose all the information per config we already have (for those datasets containing config keys; contributors made an effort to put that information per config)\r\n - and this information might be useful on the Hub in the future, in case we would like to enrich the search feature with more granularity; this is only applicable if this feature could eventually make sense\r\n\r\nSo, no strong opinion...",
"Closing in favor of https://github.com/huggingface/datasets/pull/4367"
] | 2022-05-10T05:52:46 | 2022-05-20T09:48:30 | 2022-05-20T09:40:20 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4302",
"html_url": "https://github.com/huggingface/datasets/pull/4302",
"diff_url": "https://github.com/huggingface/datasets/pull/4302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4302.patch",
"merged_at": null
} | Currently, when mirroring datasets on the Hub, the license tags are hacked: stripped of the characters "." and "$" (a sketch of this sanitization is shown below). By contrast, this hacking is not applied to community datasets on the Hub, which produces multiple variants of the same tag on the Hub.
I guess this hacking is no longer necessary:
- it is not applied to community datasets
- all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones
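For illustration, a minimal hedged sketch of the sanitization described above (the function name is hypothetical, not the actual mirroring code):
```python
def hack_license_tag(tag: str) -> str:
    # Strips the characters "." and "$" from a license tag,
    # e.g. "cc-by-4.0" -> "cc-by-40", producing a variant of the canonical tag.
    return tag.replace(".", "").replace("$", "")
```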
Fix #4298. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4302/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4302/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4301/comments | https://api.github.com/repos/huggingface/datasets/issues/4301/events | https://github.com/huggingface/datasets/pull/4301 | 1,230,401,256 | PR_kwDODunzps43idlE | 4,301 | Add ImageNet-Sketch dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright."
] | 2022-05-09T23:38:45 | 2022-05-23T18:14:14 | 2022-05-23T18:05:29 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4301",
"html_url": "https://github.com/huggingface/datasets/pull/4301",
"diff_url": "https://github.com/huggingface/datasets/pull/4301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4301.patch",
"merged_at": "2022-05-23T18:05:29"
} | This PR adds the ImageNet-Sketch dataset and resolves #3953 . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4301/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4300/comments | https://api.github.com/repos/huggingface/datasets/issues/4300/events | https://github.com/huggingface/datasets/pull/4300 | 1,230,272,761 | PR_kwDODunzps43iA86 | 4,300 | Add API code examples for loading methods | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T21:30:26 | 2022-05-25T16:23:15 | 2022-05-25T09:20:13 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4300",
"html_url": "https://github.com/huggingface/datasets/pull/4300",
"diff_url": "https://github.com/huggingface/datasets/pull/4300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4300.patch",
"merged_at": "2022-05-25T09:20:12"
} | This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me:
```py
from datasets import inspect_dataset
inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes')
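# the call above raises: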
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```
Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4300/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4299/comments | https://api.github.com/repos/huggingface/datasets/issues/4299/events | https://github.com/huggingface/datasets/pull/4299 | 1,230,236,782 | PR_kwDODunzps43h5RP | 4,299 | Remove manual download from imagenet-1k | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train/val/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.",
"@apsdehal I dismissed your review as it's no longer relevant after the data files changes suggested by @lhoestq. "
] | 2022-05-09T20:49:18 | 2022-05-25T14:54:59 | 2022-05-25T14:46:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4299",
"html_url": "https://github.com/huggingface/datasets/pull/4299",
"diff_url": "https://github.com/huggingface/datasets/pull/4299.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4299.patch",
"merged_at": "2022-05-25T14:46:16"
} | Remove the manual download code from `imagenet-1k` to make it a regular dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4299/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4298/comments | https://api.github.com/repos/huggingface/datasets/issues/4298/events | https://github.com/huggingface/datasets/issues/4298 | 1,229,748,006 | I_kwDODunzps5JTHcm | 4,298 | Normalise license names | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")",
"Fixed by #4367."
] | 2022-05-09T13:51:32 | 2022-05-20T09:51:50 | 2022-05-20T09:51:50 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the filter options exclude datasets arbitrarily, giving users artificially low recall. The duplicates most likely stem from slight variations in the metadata.
**Describe the solution you'd like**
I'd like the licenses in metadata to follow the same standard as much as possible, removing this problem. Concretely, I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json).
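A minimal hedged sketch of the normalisation meant here (the helper is illustrative and assumes `licenses.json` maps canonical license IDs to names; it is not part of `datasets`):
```python
import json

with open("src/datasets/utils/resources/licenses.json") as f:
    CANONICAL = {license_id.lower(): license_id for license_id in json.load(f)}

def normalise_license(tag: str) -> str:
    # Map case variants like "CC-BY-4.0" onto the canonical ID; leave unknown tags untouched.
    return CANONICAL.get(tag.strip().lower(), tag)
```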
**Describe alternatives you've considered**
None
**Additional context**
None
**Priority**
Low
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4298/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4298/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4297/comments | https://api.github.com/repos/huggingface/datasets/issues/4297/events | https://github.com/huggingface/datasets/issues/4297 | 1,229,735,498 | I_kwDODunzps5JTEZK | 4,297 | Datasets YAML tagging space is down | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess",
"Thanks for reporting, fixing it now",
"It's up again :)"
] | 2022-05-09T13:45:05 | 2022-05-09T14:44:25 | 2022-05-09T14:44:25 | CONTRIBUTOR | null | null | null | ## Describe the bug
The neat HF Spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be an HF Spaces web app for generating dataset metadata YAML
## Actual results
There's an error message; here's the step where it breaks:
```
Step 18/29 : RUN pip install -r requirements.txt
---> Running in e88bfe7e7e0c
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4))
Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k
WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref.
Running command git checkout -q update-task-list
error: pathspec 'update-task-list' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
```
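Per the comments above, the likely fix is to point the space's `requirements.txt` at an existing ref; a hedged sketch of the relevant line (the rest of the file is not shown in this issue):
```
git+https://github.com/huggingface/datasets.git@main
```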
## Environment info
- Platform: Linux / Brave
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4297/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4296/comments | https://api.github.com/repos/huggingface/datasets/issues/4296/events | https://github.com/huggingface/datasets/pull/4296 | 1,229,554,645 | PR_kwDODunzps43foZ- | 4,296 | Fix URL query parameters in compression hop path when streaming | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4296). All of your documentation changes will be reflected on that endpoint."
] | 2022-05-09T11:18:22 | 2022-07-06T15:19:53 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4296",
"html_url": "https://github.com/huggingface/datasets/pull/4296",
"diff_url": "https://github.com/huggingface/datasets/pull/4296.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4296.patch",
"merged_at": null
} | Fix #3488. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4296/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4295/comments | https://api.github.com/repos/huggingface/datasets/issues/4295/events | https://github.com/huggingface/datasets/pull/4295 | 1,229,527,283 | PR_kwDODunzps43fieR | 4,295 | Fix missing lz4 dependency for tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T10:53:20 | 2022-05-09T11:21:22 | 2022-05-09T11:13:44 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4295",
"html_url": "https://github.com/huggingface/datasets/pull/4295",
"diff_url": "https://github.com/huggingface/datasets/pull/4295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4295.patch",
"merged_at": "2022-05-09T11:13:44"
} | Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4295/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4294/comments | https://api.github.com/repos/huggingface/datasets/issues/4294/events | https://github.com/huggingface/datasets/pull/4294 | 1,229,455,582 | PR_kwDODunzps43fTXA | 4,294 | Fix CLI run_beam save_infos | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T09:47:43 | 2022-05-10T07:04:04 | 2022-05-10T06:56:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4294",
"html_url": "https://github.com/huggingface/datasets/pull/4294",
"diff_url": "https://github.com/huggingface/datasets/pull/4294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4294.patch",
"merged_at": "2022-05-10T06:56:10"
} | Currently, the `run_beam` CLI command raises a `TypeError`:
```
TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4294/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4293/comments | https://api.github.com/repos/huggingface/datasets/issues/4293/events | https://github.com/huggingface/datasets/pull/4293 | 1,228,815,477 | PR_kwDODunzps43dRt9 | 4,293 | Fix wrong map parameter name in cache docs | {
"login": "h4iku",
"id": 3812788,
"node_id": "MDQ6VXNlcjM4MTI3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h4iku",
"html_url": "https://github.com/h4iku",
"followers_url": "https://api.github.com/users/h4iku/followers",
"following_url": "https://api.github.com/users/h4iku/following{/other_user}",
"gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h4iku/subscriptions",
"organizations_url": "https://api.github.com/users/h4iku/orgs",
"repos_url": "https://api.github.com/users/h4iku/repos",
"events_url": "https://api.github.com/users/h4iku/events{/privacy}",
"received_events_url": "https://api.github.com/users/h4iku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-08T07:27:46 | 2022-06-14T16:49:00 | 2022-06-14T16:07:00 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4293",
"html_url": "https://github.com/huggingface/datasets/pull/4293",
"diff_url": "https://github.com/huggingface/datasets/pull/4293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4293.patch",
"merged_at": "2022-06-14T16:07:00"
} | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4293/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4292/comments | https://api.github.com/repos/huggingface/datasets/issues/4292/events | https://github.com/huggingface/datasets/pull/4292 | 1,228,216,788 | PR_kwDODunzps43bhrp | 4,292 | Add API code examples for remaining main classes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-06T18:15:31 | 2022-05-25T18:05:13 | 2022-05-25T17:56:36 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4292",
"html_url": "https://github.com/huggingface/datasets/pull/4292",
"diff_url": "https://github.com/huggingface/datasets/pull/4292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4292.patch",
"merged_at": "2022-05-25T17:56:36"
} | This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4292/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4291/comments | https://api.github.com/repos/huggingface/datasets/issues/4291/events | https://github.com/huggingface/datasets/issues/4291 | 1,227,777,500 | I_kwDODunzps5JLmXc | 4,291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.",
"Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)"
] | 2022-05-06T12:03:27 | 2022-05-09T08:25:58 | 2022-05-09T08:25:58 | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
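For reference, a minimal hedged sketch of the streaming-friendly TAR pattern (`dl_manager.iter_archive`) mentioned in the comments above; the URL, class name, and features are placeholders, not the actual `ipm_nel` script:
```python
import datasets

_URL = "https://example.com/ipm_nel_data.tar.gz"  # placeholder URL

class IpmNel(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)  # download only, no extraction, so streaming still works
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path-inside-archive, file-object) pairs sequentially,
        # which avoids the random access that plain TAR files cannot provide.
        for idx, (path, f) in enumerate(files):
            yield idx, {"text": f.read().decode("utf-8")}
```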
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4291/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4290/comments | https://api.github.com/repos/huggingface/datasets/issues/4290/events | https://github.com/huggingface/datasets/pull/4290 | 1,227,592,826 | PR_kwDODunzps43Zr08 | 4,290 | Update paper link in medmcqa dataset card | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova Kindly check :)"
] | 2022-05-06T08:52:51 | 2022-09-30T11:51:28 | 2022-09-30T11:49:07 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4290",
"html_url": "https://github.com/huggingface/datasets/pull/4290",
"diff_url": "https://github.com/huggingface/datasets/pull/4290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4290.patch",
"merged_at": "2022-09-30T11:49:07"
} | Updating readme in medmcqa dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4290/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4288/comments | https://api.github.com/repos/huggingface/datasets/issues/4288/events | https://github.com/huggingface/datasets/pull/4288 | 1,226,821,732 | PR_kwDODunzps43XLKi | 4,288 | Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287 | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-05T15:21:49 | 2022-05-10T12:55:06 | 2022-05-10T12:09:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4288",
"html_url": "https://github.com/huggingface/datasets/pull/4288",
"diff_url": "https://github.com/huggingface/datasets/pull/4288.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4288.patch",
"merged_at": "2022-05-10T12:09:48"
} | This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4288/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4287/comments | https://api.github.com/repos/huggingface/datasets/issues/4287/events | https://github.com/huggingface/datasets/issues/4287 | 1,226,806,652 | I_kwDODunzps5JH5V8 | 4,287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L249 when trying to `ds_with_embeddings.add_faiss_index(column='embeddings', device=0)` with the code above.\r\n\r\nAs it seems that the `@staticmethod` doesn't recognize the `import faiss` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L261, so whenever the value of `device` is not None in https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L438, that exception is triggered.\r\n\r\nSo on, adding `import faiss` inside https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L305 right after the check of `device`'s value, solves the issue and lets you calculate the indices in GPU.\r\n\r\nI'll add the code in a PR linked to this issue in case you want to merge it!",
"Adding here the complete error traceback!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/alvarobartt/lol.py\", line 12, in <module>\r\n ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3656, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 478, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=True)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index = self._faiss_index_to_device(index, self.device)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 327, in _faiss_index_to_device\r\n faiss_res = faiss.StandardGpuResources()\r\nNameError: name 'faiss' is not defined\r\n```",
"Closed as https://github.com/huggingface/datasets/pull/4288 already merged! :hugs:"
] | 2022-05-05T15:09:45 | 2022-05-10T13:53:19 | 2022-05-10T13:53:19 | CONTRIBUTOR | null | null | null | ## Describe the bug
When using `datasets` to compute the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to build them on a GPU device, so `.add_faiss_index(..., device=0)` fails with that exception.
This happens even when `datasets` and `faiss-gpu` are properly installed, along with all the required CUDA drivers.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset
ds = load_dataset('crime_and_punish', split='train[:100]')
ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()})
ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`
```
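For reference, the fix described in the comments above (merged in #4288) amounts to re-importing `faiss` inside the device-transfer helper; a hedged, simplified sketch of the patched fragment of `datasets/search.py`:
```python
from typing import Optional

# fragment of the FaissIndex class; simplified, not the verbatim upstream code
@staticmethod
def _faiss_index_to_device(index: "faiss.Index", device: Optional[int] = None) -> "faiss.Index":
    if device is not None and device > -1:
        import faiss  # re-import: `faiss` is only imported locally elsewhere in the module

        faiss_res = faiss.StandardGpuResources()
        index = faiss.index_cpu_to_gpu(faiss_res, device, index)
    return index
```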
## Expected results
A new `embeddings` column in the dataset, and a FAISS index over it built on the specified GPU device.
## Actual results
An exception is triggered with the following message `NameError: name 'faiss' is not defined`.
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4287/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4286/comments | https://api.github.com/repos/huggingface/datasets/issues/4286/events | https://github.com/huggingface/datasets/pull/4286 | 1,226,758,621 | PR_kwDODunzps43W-DI | 4,286 | Add Lahnda language tag | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-05T14:34:20 | 2022-05-10T12:10:04 | 2022-05-10T12:02:38 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4286",
"html_url": "https://github.com/huggingface/datasets/pull/4286",
"diff_url": "https://github.com/huggingface/datasets/pull/4286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4286.patch",
"merged_at": "2022-05-10T12:02:37"
} | This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4286/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4285/comments | https://api.github.com/repos/huggingface/datasets/issues/4285/events | https://github.com/huggingface/datasets/pull/4285 | 1,226,374,831 | PR_kwDODunzps43VtEa | 4,285 | Update LexGLUE README.md | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-05T08:36:50 | 2022-05-05T13:39:04 | 2022-05-05T13:33:35 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4285",
"html_url": "https://github.com/huggingface/datasets/pull/4285",
"diff_url": "https://github.com/huggingface/datasets/pull/4285.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4285.patch",
"merged_at": "2022-05-05T13:33:35"
} | Update the leaderboard based on the latest results presented in the ACL 2022 version of the article. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4285/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4284/comments | https://api.github.com/repos/huggingface/datasets/issues/4284/events | https://github.com/huggingface/datasets/issues/4284 | 1,226,200,727 | I_kwDODunzps5JFlaX | 4,284 | Issues in processing very large datasets | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?"
] | 2022-05-05T05:01:09 | 2022-05-10T12:15:23 | null | NONE | null | null | null | ## Describe the bug
I'm trying to add a feature called "subgraph" to the CNN/DM dataset (via modifications to the `run_summarization.py` script of Hugging Face Transformers), although I'm not quite sure I'm doing it the right way. The main problem appears when training starts: the error `OSError: [Errno 12] Cannot allocate memory` is raised. I suppose this problem is rooted in RAM usage and in how the dataset is loaded during training, but I have no clue what I can do to fix it. Observing the dataset's cache directory, I see that it takes up ~600GB, which is why I believe special care is needed when loading it into memory.
Here are my modifications to the `run_summarization.py` code.
```python
# loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph
graph_data_train = get_graph_data('train')
graph_data_validation = get_graph_data('val')
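# NOTE: both dictionaries are held entirely in RAM for the whole run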
...
...
with training_args.main_process_first(desc="train dataset map pre-processing"):
    train_dataset = train_dataset.map(
        preprocess_function_train,
        batched=True,
        num_proc=data_args.preprocessing_num_workers,
        remove_columns=column_names,
        load_from_cache_file=not data_args.overwrite_cache,
        desc="Running tokenizer on train dataset",
    )
```
And here is the modified preprocessing function:
```python
def preprocess_function_train(examples):
    inputs, targets, sub_graphs, ids = [], [], [], []
    for i in range(len(examples[text_column])):
        if examples[text_column][i] is not None and examples[summary_column][i] is not None:
            # if examples['doc_id'][i] in graph_data.keys():
            inputs.append(examples[text_column][i])
            targets.append(examples[summary_column][i])
            sub_graphs.append(graph_data_train[examples['id'][i]])
            ids.append(examples['id'][i])
    inputs = [prefix + inp for inp in inputs]
    model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True,
                             sub_graphs=sub_graphs, ids=ids)
    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True)
    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
    # padding in the loss.
    if padding == "max_length" and data_args.ignore_pad_token_for_loss:
        labels["input_ids"] = [
            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
        ]
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
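For reference, since `datasets` memory-maps its arrow cache (as noted in the comment above), one way the huge in-RAM lookup dictionary could be avoided is to materialize the subgraphs as a dataset column in a single-process pass first, so that later `map` calls with `num_proc` don't fork workers that each inherit the whole dict. This is only a minimal sketch under assumptions: it reuses the reporter's `get_graph_data` helper and the CNN/DM `id` column, and it is not a fix proposed in the thread.
```python
from datasets import load_dataset

train_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
graph_data_train = get_graph_data("train")  # reporter's helper, assumed importable

def add_subgraph(example):
    # Look each subgraph up once; afterwards it lives in the on-disk arrow cache
    return {"sub_graph": graph_data_train[example["id"]]}

# Single process here, so no forked worker copies the big dictionary;
# subsequent map() calls (e.g. tokenization) can then use num_proc safely.
train_dataset = train_dataset.map(add_subgraph)
```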
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: Linux Ubuntu
- Python version: 3.6
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4284/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4283/comments | https://api.github.com/repos/huggingface/datasets/issues/4283/events | https://github.com/huggingface/datasets/pull/4283 | 1,225,686,988 | PR_kwDODunzps43Tnxo | 4,283 | Fix filesystem docstring | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-04T17:42:42 | 2022-05-06T16:32:02 | 2022-05-06T06:22:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4283",
"html_url": "https://github.com/huggingface/datasets/pull/4283",
"diff_url": "https://github.com/huggingface/datasets/pull/4283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4283.patch",
"merged_at": "2022-05-06T06:22:17"
} | This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4283/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4282/comments | https://api.github.com/repos/huggingface/datasets/issues/4282/events | https://github.com/huggingface/datasets/pull/4282 | 1,225,616,545 | PR_kwDODunzps43TZYL | 4,282 | Don't do unnecessary list type casting to avoid replacing None values by empty lists | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?",
"Right ! Good catch, thanks, I updated the message to say \"will raise an error in a future major version\""
] | 2022-05-04T16:37:01 | 2022-05-06T10:43:58 | 2022-05-06T10:37:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4282",
"html_url": "https://github.com/huggingface/datasets/pull/4282",
"diff_url": "https://github.com/huggingface/datasets/pull/4282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4282.patch",
"merged_at": "2022-05-06T10:37:00"
} | In certain cases, `None` values are replaced by empty lists when casting feature types.
It happens every time you cast an array of nested lists like `[None, [0, 1, 2, 3]]` to a different type (to change the integer precision, for example). In this case you'd get `[[], [0, 1, 2, 3]]`. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676
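For illustration, the underlying PyArrow behavior can be reproduced directly; this is an assumed minimal reproduction, not code from this PR:
```python
import pyarrow as pa

arr = pa.array([None, [0, 1, 2, 3]])
# Rebuilding the ListArray from its raw offsets and values drops the null bitmap,
# so the None entry (marked by equal offsets) comes back as an empty list:
rebuilt = pa.ListArray.from_arrays(arr.offsets, arr.values)
print(rebuilt.to_pylist())  # [[], [0, 1, 2, 3]]
```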
This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 showed, that's not the case: `None` values are replaced by empty lists even if we cast to the exact same type.
In this PR I work around this bug in the case where no type casting is needed. In particular, I call `pa.ListArray.from_arrays` only when necessary.
I also added a warning when some `None` values are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait for a major update to do so.
This PR fixes this particular case, which occurs in `run_qa.py` in `transformers`:
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
# before:
# b
# 0 [None, [0]]
# 1 [[], [0]]
# 2 [[], [0]]
# 3 [[], [0]]
#
# now:
# b
# 0 [None, [0]]
# 1 [None, [0]]
# 2 [None, [0]]
# 3 [None, [0]]
```
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4281/comments | https://api.github.com/repos/huggingface/datasets/issues/4281/events | https://github.com/huggingface/datasets/pull/4281 | 1,225,556,939 | PR_kwDODunzps43TNBm | 4,281 | Remove a copy-paste sentence in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests have nothing to do with this PR."
] | 2022-05-04T15:41:55 | 2022-05-06T08:38:03 | 2022-05-04T18:33:16 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4281",
"html_url": "https://github.com/huggingface/datasets/pull/4281",
"diff_url": "https://github.com/huggingface/datasets/pull/4281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4281.patch",
"merged_at": "2022-05-04T18:33:16"
} | Remove the following copy-paste sentence from dataset cards:
```
We show detailed information for up to 5 configurations of the dataset.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4281/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4280/comments | https://api.github.com/repos/huggingface/datasets/issues/4280/events | https://github.com/huggingface/datasets/pull/4280 | 1,225,446,844 | PR_kwDODunzps43S2xg | 4,280 | Add missing features to commonsense_qa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. ",
"Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the dataset feature structure."
] | 2022-05-04T14:24:26 | 2022-05-06T14:23:57 | 2022-05-06T14:16:46 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4280",
"html_url": "https://github.com/huggingface/datasets/pull/4280",
"diff_url": "https://github.com/huggingface/datasets/pull/4280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4280.patch",
"merged_at": "2022-05-06T14:16:46"
} | Fix partially #4275. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4280/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4279/comments | https://api.github.com/repos/huggingface/datasets/issues/4279/events | https://github.com/huggingface/datasets/pull/4279 | 1,225,300,273 | PR_kwDODunzps43SXw5 | 4,279 | Update minimal PyArrow version warning | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-04T12:26:09 | 2022-05-05T08:50:58 | 2022-05-05T08:43:47 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4279",
"html_url": "https://github.com/huggingface/datasets/pull/4279",
"diff_url": "https://github.com/huggingface/datasets/pull/4279.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4279.patch",
"merged_at": "2022-05-05T08:43:47"
} | Update the minimal PyArrow version warning (should've been part of #4250). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4279/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4278/comments | https://api.github.com/repos/huggingface/datasets/issues/4278/events | https://github.com/huggingface/datasets/pull/4278 | 1,225,122,123 | PR_kwDODunzps43RyTs | 4,278 | Add missing features to openbookqa dataset for additional config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the data feature structure."
] | 2022-05-04T09:22:50 | 2022-05-06T13:13:20 | 2022-05-06T13:06:01 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4278",
"html_url": "https://github.com/huggingface/datasets/pull/4278",
"diff_url": "https://github.com/huggingface/datasets/pull/4278.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4278.patch",
"merged_at": "2022-05-06T13:06:01"
} | Fix partially #4276. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4278/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4277/comments | https://api.github.com/repos/huggingface/datasets/issues/4277/events | https://github.com/huggingface/datasets/pull/4277 | 1,225,002,286 | PR_kwDODunzps43RZV9 | 4,277 | Enable label alignment for token classification datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again fixed the problem 🙃 ",
"> One last nit and we can merge then\r\n\r\nThanks, done!"
] | 2022-05-04T07:15:16 | 2022-05-06T15:42:15 | 2022-05-06T15:36:31 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4277",
"html_url": "https://github.com/huggingface/datasets/pull/4277",
"diff_url": "https://github.com/huggingface/datasets/pull/4277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4277.patch",
"merged_at": "2022-05-06T15:36:31"
} | This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER).
Example of usage:
```python
from datasets import load_dataset
ner_ds = load_dataset("conll2003", split="train")
# returns [3, 0, 7, 0, 0, 0, 7, 0, 0]
ner_ds[0]["ner_tags"]
# hypothetical model mapping with O <--> B-LOC
label2id = {
    "B-LOC": "0",
    "B-MISC": "7",
    "B-ORG": "3",
    "B-PER": "1",
    "I-LOC": "6",
    "I-MISC": "8",
    "I-ORG": "4",
    "I-PER": "2",
    "O": "5",
}
ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags")
# returns [3, 5, 7, 5, 5, 5, 7, 5, 5]
ner_aligned_ds[0]["ner_tags"]
```
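For comparison, the same method already covers flat `ClassLabel` columns; here is a sketch based on the documented MNLI-style usage (the mapping values are illustrative):
```python
from datasets import load_dataset

mnli_ds = load_dataset("glue", "mnli", split="train")
# Align the dataset's label ids with a model that orders the labels differently
label2id = {"contradiction": 0, "neutral": 1, "entailment": 2}
mnli_aligned = mnli_ds.align_labels_with_mapping(label2id, "label")
```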
Context: we need this in AutoTrain to automatically align datasets / models during evaluation. cc @abhishekkrthakur | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4277/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4276/comments | https://api.github.com/repos/huggingface/datasets/issues/4276/events | https://github.com/huggingface/datasets/issues/4276 | 1,224,949,252 | I_kwDODunzps5JAz4E | 4,276 | OpenBookQA has missing and inconsistent field names | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ",
"Ok, awesome @albertvillanova How about #4275 ?",
"On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n ```python\r\n question = data[\"question\"][\"stem\"]\r\n choices = data[\"question\"][\"choices\"]\r\n text_choices = [choice[\"text\"] for choice in choices]\r\n label_choices = [choice[\"label\"] for choice in choices]\r\n yield id_, {\r\n \"id\": id_,\r\n \"answerKey\": answerkey,\r\n \"question\": question,\r\n \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n }\r\n ```\r\n- commonsense_qa:\r\n ```python\r\n question = data[\"question\"]\r\n stem = question[\"stem\"]\r\n yield id_, {\r\n \"answerKey\": answerkey,\r\n \"question\": stem,\r\n \"choices\": {\"label\": labels, \"text\": texts},\r\n }\r\n ```\r\n- cos_e:\r\n ```python\r\n \"question\": cqa[\"question\"][\"stem\"],\r\n ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and use always CONSISTENTLY the same.",
"@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading original data rather than using HF Datasets) and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields like in the case of #4275 is definitely bad and not even up for a discussion IMHO! cc @lhoestq ",
"I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ",
"IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).",
"I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibility. Users who relied on the old format will update their code with either the util method for a quick fix or slightly more elaborate for the new. ",
"I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),\r\n- and on the other hand performing some kind of standardization/harmonization depending on the task (this has the advantage that once learnt, the same structure applies to all datasets; this has been done for e.g. POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.",
"@albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. ",
"Datasets are not tracked in this repository anymore. I think we must move this thread to the [discussions tab of the dataset](https://huggingface.co/datasets/openbookqa/discussions)",
"Indeed @osbm thanks. I'm closing this issue if it's fine for you all then"
] | 2022-05-04T05:51:52 | 2022-10-11T17:11:53 | 2022-10-05T13:50:03 | CONTRIBUTOR | null | null | null | ## Describe the bug
The OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field `["question"]["stem"]` is flattened into `question_stem`. Unflatten it to match the original format.
2. Add the missing additional fields (see the sketch after this list):
- 'fact1': row['fact1'],
- 'humanScore': row['humanScore'],
- 'clarity': row['clarity'],
- 'turkIdAnonymized': row['turkIdAnonymized']
3. Ensure that the structure and every data item in our OpenBookQA version match the original OpenBookQA.
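A minimal sketch of what the repaired `_generate_examples` could look like; the field names come from the list above, while the file handling and exact loader structure are assumptions:
```python
import json

def _generate_examples(filepath):
    # Sketch only: filepath points at one OpenBookQA "additional" jsonl file
    with open(filepath, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            choices = row["question"]["choices"]
            yield row["id"], {
                "id": row["id"],
                "question": {"stem": row["question"]["stem"]},  # unflattened, per item 1
                "choices": {
                    "text": [c["text"] for c in choices],
                    "label": [c["label"] for c in choices],
                },
                "answerKey": row["answerKey"],
                "fact1": row["fact1"],
                "humanScore": row["humanScore"],
                "clarity": row["clarity"],
                "turkIdAnonymized": row["turkIdAnonymized"],
            }
```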
## Expected results
The structure and every data item in our OpenBookQA version match the original OpenBookQA.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4276/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4275/comments | https://api.github.com/repos/huggingface/datasets/issues/4275/events | https://github.com/huggingface/datasets/issues/4275 | 1,224,943,414 | I_kwDODunzps5JAyc2 | 4,275 | CommonSenseQA has missing and inconsistent field names | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. "
] | 2022-05-04T05:38:59 | 2022-05-04T11:41:18 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
In short, the CommonSenseQA implementation is inconsistent with the original dataset.
More precisely, we need to:
1. Add the original dataset's matching "id" field (see the sketch after this list). The current implementation instead regenerates a monotonically increasing id.
2. The `["question"]["stem"]` field is flattened into "question". We should match the original dataset and unflatten it.
3. Add the missing "question_concept" field in the question tree node.
4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original.
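A minimal sketch of the analogous `_generate_examples` change, following the existing loader pattern quoted in the comments above; the file handling is an assumption:
```python
import json

def _generate_examples(filepath):
    # Sketch only: filepath points at a CommonSenseQA jsonl file
    with open(filepath, encoding="utf-8") as f:
        for line in f:
            data = json.loads(line)
            question = data["question"]
            yield data["id"], {  # keep the original "id" instead of a running counter
                "id": data["id"],
                "question": {  # unflattened, with the missing field restored
                    "stem": question["stem"],
                    "question_concept": question["question_concept"],
                },
                "choices": {
                    "label": [c["label"] for c in question["choices"]],
                    "text": [c["text"] for c in question["choices"]],
                },
                "answerKey": data.get("answerKey", ""),  # absent in the test split
            }
```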
## Expected results
Every data item of the CommonSenseQA dataset should match the original CommonSenseQA in both structure and content.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4275/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4274/comments | https://api.github.com/repos/huggingface/datasets/issues/4274/events | https://github.com/huggingface/datasets/pull/4274 | 1,224,740,303 | PR_kwDODunzps43Qm2w | 4,274 | Add API code examples for IterableDataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-03T22:44:17 | 2022-05-04T16:29:32 | 2022-05-04T16:22:04 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4274",
"html_url": "https://github.com/huggingface/datasets/pull/4274",
"diff_url": "https://github.com/huggingface/datasets/pull/4274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4274.patch",
"merged_at": "2022-05-04T16:22:04"
} | This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4274/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4273/comments | https://api.github.com/repos/huggingface/datasets/issues/4273/events | https://github.com/huggingface/datasets/pull/4273 | 1,224,681,036 | PR_kwDODunzps43QaA6 | 4,273 | leaderboard info added for TNE | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-03T21:35:41 | 2022-05-05T13:25:24 | 2022-05-05T13:18:13 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4273",
"html_url": "https://github.com/huggingface/datasets/pull/4273",
"diff_url": "https://github.com/huggingface/datasets/pull/4273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4273.patch",
"merged_at": "2022-05-05T13:18:13"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4273/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4272/comments | https://api.github.com/repos/huggingface/datasets/issues/4272/events | https://github.com/huggingface/datasets/pull/4272 | 1,224,635,660 | PR_kwDODunzps43QQQt | 4,272 | Fix typo in logging docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. The second result still performs as \"Enable\".",
"Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n![Screen Shot 2022-05-04 at 8 38 29 AM](https://user-images.githubusercontent.com/59462357/166718225-6848ab91-87d1-4572-9912-40a909af6cb9.png)\r\n",
"Fixed now, thanks."
] | 2022-05-03T20:47:57 | 2022-05-04T15:42:27 | 2022-05-04T06:58:36 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4272",
"html_url": "https://github.com/huggingface/datasets/pull/4272",
"diff_url": "https://github.com/huggingface/datasets/pull/4272.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4272.patch",
"merged_at": "2022-05-04T06:58:35"
} | This PR fixes #4271. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4272/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4271/comments | https://api.github.com/repos/huggingface/datasets/issues/4271/events | https://github.com/huggingface/datasets/issues/4271 | 1,224,404,403 | I_kwDODunzps5I-u2z | 4,271 | A typo in docs of datasets.disable_progress_bar | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)"
] | 2022-05-03T17:44:56 | 2022-05-04T06:58:35 | 2022-05-04T06:58:35 | NONE | null | null | null | ## Describe the bug
In the docs of `datasets.disable_progress_bar` for v2.1.0, we should replace "enable" with "disable".
"url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4271/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4270/comments | https://api.github.com/repos/huggingface/datasets/issues/4270/events | https://github.com/huggingface/datasets/pull/4270 | 1,224,244,460 | PR_kwDODunzps43PC5V | 4,270 | Fix style in openbookqa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-03T15:21:34 | 2022-05-06T08:38:06 | 2022-05-03T16:20:52 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4270",
"html_url": "https://github.com/huggingface/datasets/pull/4270",
"diff_url": "https://github.com/huggingface/datasets/pull/4270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4270.patch",
"merged_at": "2022-05-03T16:20:52"
} | CI in PR:
- #4259
was green, but after merging it to master, a code quality error appeared. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4270/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4269/comments | https://api.github.com/repos/huggingface/datasets/issues/4269/events | https://github.com/huggingface/datasets/pull/4269 | 1,223,865,145 | PR_kwDODunzps43Nzwh | 4,269 | Add license and point of contact to big_patent dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-03T09:24:07 | 2022-05-06T08:38:09 | 2022-05-03T11:16:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4269",
"html_url": "https://github.com/huggingface/datasets/pull/4269",
"diff_url": "https://github.com/huggingface/datasets/pull/4269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4269.patch",
"merged_at": "2022-05-03T11:16:19"
} | Update metadata of big_patent dataset with:
- license
- point of contact | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4269/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4268/comments | https://api.github.com/repos/huggingface/datasets/issues/4268/events | https://github.com/huggingface/datasets/issues/4268 | 1,223,331,964 | I_kwDODunzps5I6pB8 | 4,268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɜːd/\r\n([General American](https://en.wikipedia.org/wiki/General_American)) [enPR](https://en.wiktionary.org/wiki/Appendix:English_pronunciation): wûrd, [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɝd/",
"Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https://huggingface.co/datasets/bigscience-catalogue-lm-data/lm_en_wiktionary_filtered/commit/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data/file-01.jsonl.gz.lock`, `data/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). Once public, they will be accessible for all the NLP community.",
"Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!",
"All datasets are private now. \r\n\r\nRe:that bug I think we're currently avoiding it by avoiding verifications. (i.e. `ignore_verifications=True`)",
"Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ",
"Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issues/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your/HG's roadmap). Thanks!",
"Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)",
"@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-content.json.gz) file",
"thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it!",
"thanks @patrickvonplaten. will do - getting my observations together."
] | 2022-05-02T20:34:25 | 2022-05-06T15:53:30 | 2022-05-03T11:23:48 | NONE | null | null | null | ## Describe the bug
An error is raised when attempting to download the dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
The dataset downloads and loads without raising an error.
## Actual results
```
ExpectedMoreDownloadedFiles Traceback (most recent call last)
<ipython-input-62-4ac5cf959477> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
31 return
32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:
---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:
35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'}
```
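## Workaround
A possible interim workaround (a sketch, not a fix; it assumes `datasets` 1.18.x, where `load_dataset` accepts `ignore_verifications`) is to skip the checksum verification that raises the error:
```python
from datasets import load_dataset

# Skip checksum/split verification so the stray .lock files recorded in the
# repository metadata no longer trigger ExpectedMoreDownloadedFiles.
dataset = load_dataset(
    "bigscience-catalogue-lm-data/lm_en_wiktionary_filtered",
    ignore_verifications=True,
)
```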
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4268/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4267/comments | https://api.github.com/repos/huggingface/datasets/issues/4267/events | https://github.com/huggingface/datasets/pull/4267 | 1,223,214,275 | PR_kwDODunzps43LzOR | 4,267 | Replace data URL in SAMSum dataset within the same repository | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-02T18:38:08 | 2022-05-06T08:38:13 | 2022-05-02T19:03:49 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4267",
"html_url": "https://github.com/huggingface/datasets/pull/4267",
"diff_url": "https://github.com/huggingface/datasets/pull/4267.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4267.patch",
"merged_at": "2022-05-02T19:03:49"
} | Replace data URL with one in the same repository. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4267/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4266/comments | https://api.github.com/repos/huggingface/datasets/issues/4266/events | https://github.com/huggingface/datasets/pull/4266 | 1,223,116,436 | PR_kwDODunzps43LeXK | 4,266 | Add HF Speech Bench to Librispeech Dataset Card | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-02T16:59:31 | 2022-05-05T08:47:20 | 2022-05-05T08:40:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4266",
"html_url": "https://github.com/huggingface/datasets/pull/4266",
"diff_url": "https://github.com/huggingface/datasets/pull/4266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4266.patch",
"merged_at": "2022-05-05T08:40:09"
} | Adds the HF Speech Bench to the Librispeech Dataset Card in place of the Papers With Code Leaderboard. This should improve the usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) by someone with permissions?
cc @patrickvonplaten: more leaderboard promotion! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4266/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4263/comments | https://api.github.com/repos/huggingface/datasets/issues/4263/events | https://github.com/huggingface/datasets/pull/4263 | 1,222,723,083 | PR_kwDODunzps43KLnD | 4,263 | Rename imagenet2012 -> imagenet-1k | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?",
"> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nEDIT: actually not all `imagenet` tag refer to ImageNet 21k - we will need to correct some of them",
"_The documentation is not available anymore as the PR was closed or merged._",
"should we remove the repo mirror on the hub side or will you do it?"
] | 2022-05-02T10:26:21 | 2022-05-02T17:50:46 | 2022-05-02T16:32:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4263",
"html_url": "https://github.com/huggingface/datasets/pull/4263",
"diff_url": "https://github.com/huggingface/datasets/pull/4263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4263.patch",
"merged_at": "2022-05-02T16:32:57"
} | On the Hugging Face Hub, users refer to imagenet2012 (from #4178) as imagenet-1k in their model tags.
To correctly link models to imagenet, we should rename this dataset `imagenet-1k`.
Later we can add `imagenet-21k` as a new dataset if we want.
Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub.
EDIT: to complete the rationale on why we should name it `imagenet-1k`:
If users specifically added the tag `imagenet-1k`, then it could be for two reasons (not sure which one is predominant): either they
- wanted to make it explicit that it’s not 21k -> the distinction is important for the community
- or they have been following this convention from other models -> the convention implicitly exists already | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4263/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4263/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4262/comments | https://api.github.com/repos/huggingface/datasets/issues/4262/events | https://github.com/huggingface/datasets/pull/4262 | 1,222,130,749 | PR_kwDODunzps43IOye | 4,262 | Add YAML tags to Dataset Card rotten tomatoes | {
"login": "mo6zes",
"id": 10004251,
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mo6zes",
"html_url": "https://github.com/mo6zes",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-01T11:59:08 | 2022-05-03T14:27:33 | 2022-05-03T14:20:35 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4262",
"html_url": "https://github.com/huggingface/datasets/pull/4262",
"diff_url": "https://github.com/huggingface/datasets/pull/4262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4262.patch",
"merged_at": "2022-05-03T14:20:35"
} | The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4262/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4262/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4261/comments | https://api.github.com/repos/huggingface/datasets/issues/4261/events | https://github.com/huggingface/datasets/issues/4261 | 1,221,883,779 | I_kwDODunzps5I1HeD | 4,261 | data leakage in `webis/conclugen` dataset | {
"login": "xflashxx",
"id": 54585776,
"node_id": "MDQ6VXNlcjU0NTg1Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/54585776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xflashxx",
"html_url": "https://github.com/xflashxx",
"followers_url": "https://api.github.com/users/xflashxx/followers",
"following_url": "https://api.github.com/users/xflashxx/following{/other_user}",
"gists_url": "https://api.github.com/users/xflashxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xflashxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xflashxx/subscriptions",
"organizations_url": "https://api.github.com/users/xflashxx/orgs",
"repos_url": "https://api.github.com/users/xflashxx/repos",
"events_url": "https://api.github.com/users/xflashxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/xflashxx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.",
"i'd suggest just pinging the authors here in the issue if possible?",
"Thanks for reporting this @xflashxx. I'll have a look and get back to you on this.",
"Hi @xflashxx and @albertvillanova,\r\n\r\nI have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updated that lead to \"new\" items. The length of `ids_validation` and `ids_testing` is zero.\r\n\r\nRegarding impact on scores:\r\n1. We employed automatic evaluation (on a separate set of 1000 examples) only to justify the exclusion of the smaller models for manual evaluation (due to budget constraints). I am confident the ranking still stands (unsurprisingly, the bigger models doing better than those trained on the smaller splits). We also highlight this in the paper. \r\n\r\n2. The examples used for manual evaluation have no overlap with any splits (also because they do not have any ground truth as we applied the trained models on an unlabeled sample to test its practical usage). I've added these two files to the dataset repository.\r\n\r\nHope this helps!",
"Thanks @shahbazsyed for your fast fix.\r\n\r\nAs a side note:\r\n- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de\r\n- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link; the email address used in your commit author and the email address set on your Hub account settings."
] | 2022-04-30T17:43:37 | 2022-05-03T06:04:26 | 2022-05-03T06:04:26 | NONE | null | null | null | ## Describe the bug
Some samples (argument-conclusion pairs) from the *training* split of the `webis/conclugen` dataset are also present in the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```python
from datasets import load_dataset
training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")
# collect the ids of validation/test samples that also appear in the training split
ids_validation = list()
ids_testing = list()
for train_sample in training:
train_argument = train_sample["argument"]
train_conclusion = train_sample["conclusion"]
train_id = train_sample["id"]
# test if current sample is in validation split
if train_argument in validation["argument"]:
for validation_sample in validation:
validation_argument = validation_sample["argument"]
validation_conclusion = validation_sample["conclusion"]
validation_id = validation_sample["id"]
if train_argument == validation_argument and train_conclusion == validation_conclusion:
ids_validation.append(validation_id)
# test if current sample is in test split
if train_argument in testing["argument"]:
for testing_sample in testing:
testing_argument = testing_sample["argument"]
testing_conclusion = testing_sample["conclusion"]
testing_id = testing_sample["id"]
if train_argument == testing_argument and train_conclusion == testing_conclusion:
ids_testing.append(testing_id)
```
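For reference, the same overlap check can be done much faster with set lookups over (argument, conclusion) pairs; this is only a sketch, and it assumes the splits and column names used above:
```python
from datasets import load_dataset

# Equivalent overlap check using set membership instead of nested loops.
training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")

train_pairs = set(zip(training["argument"], training["conclusion"]))

ids_validation = [s["id"] for s in validation
                  if (s["argument"], s["conclusion"]) in train_pairs]
ids_testing = [s["id"] for s in testing
               if (s["argument"], s["conclusion"]) in train_pairs]
```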
## Expected results
Length of both lists `ids_validation` and `ids_testing` should be zero.
## Actual results
Length of `ids_validation` = `2556`
Length of `ids_testing` = `287`
Furthermore, there seem to be duplicate samples in (at least) the *training* split, since:
`print(len(set(ids_validation)))` = `950`
`print(len(set(ids_testing)))` = `101`
All in all, around 7% of the samples in each of the *validation* and *test* splits seem to be present in the *training* split.
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4261/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4260/comments | https://api.github.com/repos/huggingface/datasets/issues/4260/events | https://github.com/huggingface/datasets/pull/4260 | 1,221,830,292 | PR_kwDODunzps43HSfs | 4,260 | Add mr_polarity movie review sentiment classification | {
"login": "mo6zes",
"id": 10004251,
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mo6zes",
"html_url": "https://github.com/mo6zes",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"whoops just found https://huggingface.co/datasets/rotten_tomatoes"
] | 2022-04-30T13:19:33 | 2022-04-30T14:16:25 | 2022-04-30T14:16:25 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4260",
"html_url": "https://github.com/huggingface/datasets/pull/4260",
"diff_url": "https://github.com/huggingface/datasets/pull/4260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4260.patch",
"merged_at": null
} | Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative".
Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/)
paperswithcode: [https://paperswithcode.com/dataset/mr](https://paperswithcode.com/dataset/mr)
- [ ] I was not able to generate dummy data: the original dataset files have ".pos" and ".neg" as file extensions, so the auto-generator does not work. Is it fine like this, or should dummy data be added? (A sketch of how such files could be read is included below.)
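For illustration, here is a minimal sketch of how a loading script could read such ".pos"/".neg" files (the function name, file paths, and the Latin-1 encoding are assumptions, not the dataset's actual loader):
```python
# Hypothetical reader for the MR polarity files; paths and encoding are assumptions.
def _generate_examples(pos_path="rt-polarity.pos", neg_path="rt-polarity.neg"):
    idx = 0
    for path, label in [(pos_path, "positive"), (neg_path, "negative")]:
        with open(path, encoding="latin-1") as f:  # assuming non-UTF-8 source files
            for line in f:
                yield idx, {"text": line.strip(), "label": label}
                idx += 1
```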
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4260/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4259/comments | https://api.github.com/repos/huggingface/datasets/issues/4259/events | https://github.com/huggingface/datasets/pull/4259 | 1,221,768,025 | PR_kwDODunzps43HHGc | 4,259 | Fix bug in choices labels in openbookqa dataset | {
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-30T07:41:39 | 2022-05-04T06:31:31 | 2022-05-03T15:14:21 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4259",
"html_url": "https://github.com/huggingface/datasets/pull/4259",
"diff_url": "https://github.com/huggingface/datasets/pull/4259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4259.patch",
"merged_at": "2022-05-03T15:14:21"
} | This PR fixes the bug in the openbookqa dataset mentioned in issue #3550.
Fix #3550.
cc. @lhoestq @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4259/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4258/comments | https://api.github.com/repos/huggingface/datasets/issues/4258/events | https://github.com/huggingface/datasets/pull/4258 | 1,221,637,727 | PR_kwDODunzps43Gstg | 4,258 | Fix/start token mask issue and update documentation | {
"login": "TristanThrush",
"id": 20826878,
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TristanThrush",
"html_url": "https://github.com/TristanThrush",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. I updated the pull request description, and will keep this in mind for future PRs."
] | 2022-04-29T22:42:44 | 2022-05-02T16:33:20 | 2022-05-02T16:26:12 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4258",
"html_url": "https://github.com/huggingface/datasets/pull/4258",
"diff_url": "https://github.com/huggingface/datasets/pull/4258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4258.patch",
"merged_at": "2022-05-02T16:26:12"
} | This PR fixes a couple of bugs:
1) the perplexity was calculated with a 0 in the attention mask for the start token, which caused incorrectly high perplexity scores (a small illustration follows below)
2) the documentation was not up to date
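As a rough illustration of the first bug (the names and values below are illustrative, not the metric's actual code), the start token must be attended to when computing perplexity:
```python
import torch

bos_token_id = 0            # illustrative value
encoded_ids = [42, 7, 99]   # illustrative token ids

# Prepend the start token and attend to it: leaving a 0 at the BOS position
# of the attention mask is what produced the inflated perplexity scores.
input_ids = torch.tensor([[bos_token_id] + encoded_ids])
attention_mask = torch.ones_like(input_ids)  # fix: BOS position is 1, not 0
```
 | {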
"url": "https://api.github.com/repos/huggingface/datasets/issues/4258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4258/timeline | null | null | true |