id (int64) | labels_url (string) | body (string) | updated_at (string) | number (int64) | milestone (dict) | repository_url (string) | draft (bool) | labels (list) | created_at (string) | comments_url (string) | assignee (dict) | timeline_url (string) | title (string) | events_url (string) | active_lock_reason (null) | user (dict) | assignees (list) | performed_via_github_app (null) | state_reason (string) | author_association (string) | closed_at (string) | pull_request (dict) | node_id (string) | comments (sequence) | reactions (dict) | state (string) | locked (bool) | url (string) | html_url (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,231,400,200 | https://api.github.com/repos/huggingface/datasets/issues/6793/labels{/name} | ### Describe the bug
I'd expect the following code to download just the validation split, but instead I get all the data on my disk (train, test and validation splits):

```python
from datasets import load_dataset

dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
### Steps to reproduce the bug
1. Install the required libraries (python, datasets, huggingface_hub)
2. Log in using the Hugging Face CLI
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
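As a stopgap I'm assuming the streaming variant below avoids materialising the other splits on disk (sketch only; I'd still expect the non-streaming call to respect `split`):

```python
from datasets import load_dataset

# streaming reads examples on the fly instead of downloading every split to disk
# (my assumption as a workaround, not the behaviour I'd expect from the call above)
dataset = load_dataset("imagenet-1k", split="validation", streaming=True, trust_remote_code=True)
print(next(iter(dataset)).keys())
```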
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2 | 2024-04-08T14:39:14Z | 6,793 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T14:39:14Z | https://api.github.com/repos/huggingface/datasets/issues/6793/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6793/timeline | Loading just one particular split is not possible for imagenet-1k | https://api.github.com/repos/huggingface/datasets/issues/6793/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/165930106?v=4",
"events_url": "https://api.github.com/users/PaulPSta/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulPSta/followers",
"following_url": "https://api.github.com/users/PaulPSta/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulPSta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulPSta",
"id": 165930106,
"login": "PaulPSta",
"node_id": "U_kgDOCePkeg",
"organizations_url": "https://api.github.com/users/PaulPSta/orgs",
"received_events_url": "https://api.github.com/users/PaulPSta/received_events",
"repos_url": "https://api.github.com/users/PaulPSta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulPSta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulPSta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulPSta"
} | [] | null | null | NONE | null | null | I_kwDODunzps6FAHcI | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6793/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6793 | https://github.com/huggingface/datasets/issues/6793 | false |
2,231,318,682 | https://api.github.com/repos/huggingface/datasets/issues/6792/labels{/name} | It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config_kwargs like `sample_by=`
fix https://github.com/huggingface/datasets/issues/6758 | 2024-04-08T15:55:21Z | 6,792 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-08T14:05:42Z | https://api.github.com/repos/huggingface/datasets/issues/6792/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6792/timeline | Fix cache conflict in `_check_legacy_cache2` | https://api.github.com/repos/huggingface/datasets/issues/6792/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6792",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6792"
} | PR_kwDODunzps5sBEyn | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6792). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6792/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6792 | https://github.com/huggingface/datasets/pull/6792 | true |
2,230,102,332 | https://api.github.com/repos/huggingface/datasets/issues/6791/labels{/name} | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce the bug
1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]`
2. Add an FAISS index on any column `ds.add_faiss_index('title')`
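For contrast, a toy sketch (random vectors rather than real embeddings, which I assume is the intended usage) where the same call succeeds because the column holds fixed-size float vectors:

```python
import numpy as np
from datasets import Dataset

# toy column of fixed-size float32 vectors standing in for real embeddings
ds = Dataset.from_dict({"embeddings": [np.random.rand(8).astype("float32") for _ in range(16)]})
ds.add_faiss_index(column="embeddings")
scores, samples = ds.get_nearest_examples("embeddings", np.random.rand(8).astype("float32"), k=3)
```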
### Expected behavior
The index should be created
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.9.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
- `faiss-cpu` version: 1.8.0 | 2024-04-09T01:30:55Z | 6,791 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T01:57:03Z | https://api.github.com/repos/huggingface/datasets/issues/6791/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6791/timeline | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/huggingface/datasets/issues/6791/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/40491005?v=4",
"events_url": "https://api.github.com/users/NeuralFlux/events{/privacy}",
"followers_url": "https://api.github.com/users/NeuralFlux/followers",
"following_url": "https://api.github.com/users/NeuralFlux/following{/other_user}",
"gists_url": "https://api.github.com/users/NeuralFlux/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeuralFlux",
"id": 40491005,
"login": "NeuralFlux",
"node_id": "MDQ6VXNlcjQwNDkxMDA1",
"organizations_url": "https://api.github.com/users/NeuralFlux/orgs",
"received_events_url": "https://api.github.com/users/NeuralFlux/received_events",
"repos_url": "https://api.github.com/users/NeuralFlux/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeuralFlux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeuralFlux/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeuralFlux"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E7Kk8 | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embeddings as FAISS doesn't perform the embeddings itself.\r\n\r\nI can propose a PR sometime this week.",
"@Dref360 thanks for the initiative!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6791/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6791 | https://github.com/huggingface/datasets/issues/6791 | false |
2,229,915,236 | https://api.github.com/repos/huggingface/datasets/issues/6790/labels{/name} | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggingface/datasets/issues/6176).
In my case, I was trying to load ~70k dataset files from disk using `datasets.load_from_disk(data_path)` (meaning 70k repeated calls to load_from_disk). This triggered an (uninformative) exception around 64k loaded files:
```
File "pyarrow/io.pxi", line 1053, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 1000, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
Despite system RAM usage being very low. After a lot of digging around, I discovered that my Ubuntu machine had a limit on the maximum number of memory mapped files in `/proc/sys/vm/max_map_count` set to 65530, which was causing my data loader to crash. Increasing the limit in the file (`echo <new_mmap_size> | sudo tee /proc/sys/vm/max_map_count`) made the issue go away.
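For anyone else debugging this, a quick way to check the limit from Python before loading (Linux only; the 70k file count is illustrative and mirrors the reproduction below):

```python
from pathlib import Path

# current per-process limit on memory-mapped regions (Linux only)
max_map_count = int(Path("/proc/sys/vm/max_map_count").read_text())
print(f"vm.max_map_count = {max_map_count}")

# number of files we intend to memory-map (illustrative value)
num_files = 70_000
if num_files >= max_map_count:
    print("pyarrow.lib.memory_map will likely fail with 'Cannot allocate memory'")
```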
While this isn't a bug as such in either Datasets or PyArrow, this behavior can be very confusing to users. Maybe this should be mentioned in documentation? I suspect the other issues raised here about memory mapping OOM errors could actually be consequence of system configuration.
Br,
Lauri
### Steps to reproduce the bug
```
import numpy as np
import pyarrow as pa
import tqdm
# Write some data to disk
arr = pa.array(np.arange(100))
schema = pa.schema([
    pa.field('nums', arr.type)
])

with pa.OSFile('arraydata.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, schema=schema) as writer:
        batch = pa.record_batch([arr], schema=schema)
        writer.write(batch)
# Number of times to open the memory map
nums = 70000
# Read the data back
arrays = [pa.memory_map('arraydata.arrow', 'r') for _ in tqdm.tqdm(range(nums))]
```
### Expected behavior
No errors.
### Environment info
datasets: 2.18.0
pyarrow: 15.0.0 | 2024-04-07T20:00:54Z | 6,790 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T19:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6790/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6790/timeline | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | https://api.github.com/repos/huggingface/datasets/issues/6790/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25725697?v=4",
"events_url": "https://api.github.com/users/lasuomela/events{/privacy}",
"followers_url": "https://api.github.com/users/lasuomela/followers",
"following_url": "https://api.github.com/users/lasuomela/following{/other_user}",
"gists_url": "https://api.github.com/users/lasuomela/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lasuomela",
"id": 25725697,
"login": "lasuomela",
"node_id": "MDQ6VXNlcjI1NzI1Njk3",
"organizations_url": "https://api.github.com/users/lasuomela/orgs",
"received_events_url": "https://api.github.com/users/lasuomela/received_events",
"repos_url": "https://api.github.com/users/lasuomela/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lasuomela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lasuomela/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lasuomela"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E6c5k | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6790/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6790 | https://github.com/huggingface/datasets/issues/6790 | false |
2,229,527,001 | https://api.github.com/repos/huggingface/datasets/issues/6789/labels{/name} | ### Describe the bug
Map has been taking extremely long to preprocess my data.
It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reason by creating a file named tmp1335llua that is over 300GB.
Trying to set num_proc to be >1 also gives me the following error: NameError: name 'processor' is not defined
Please advise on how I could optimise this?
### Steps to reproduce the bug
In general, I have been using map as per normal. Here is a snippet of my code:
````
########################### DATASET LOADING AND PREP #########################
def load_custom_dataset(split):
    ds = []
    if split == 'train':
        for dset in args.train_datasets:
            ds.append(load_from_disk(dset))
    if split == 'test':
        for dset in args.test_datasets:
            ds.append(load_from_disk(dset))

    ds_to_return = concatenate_datasets(ds)
    ds_to_return = ds_to_return.shuffle(seed=22)
    return ds_to_return


def prepare_dataset(batch):
    # load and (possibly) resample audio data to 16kHz
    audio = batch["audio"]

    # compute log-Mel input features from input audio array
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # compute input length of audio sample in seconds
    batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]

    # optional pre-processing steps
    transcription = batch["sentence"]
    if do_lower_case:
        transcription = transcription.lower()
    if do_remove_punctuation:
        transcription = normalizer(transcription).strip()

    # encode target text to label ids
    batch["labels"] = processor.tokenizer(transcription).input_ids
    return batch


print('DATASET PREPARATION IN PROGRESS...')

# case 3: combine_and_shuffle is true, only train provided
# load train datasets
train_set = load_custom_dataset('train')

# split dataset
raw_dataset = DatasetDict()
raw_dataset = train_set.train_test_split(test_size=args.test_size, shuffle=True, seed=42)
raw_dataset = raw_dataset.cast_column("audio", Audio(sampling_rate=args.sampling_rate))

print("Before Map:")
print(raw_dataset)

raw_dataset = raw_dataset.map(prepare_dataset, num_proc=1)

print("After Map:")
print(raw_dataset)
````
### Expected behavior
Based on the speed at which map processes examples, I would expect the full mapping to complete in about 5-6 hours.
However, because it hangs every 1000 examples, I instead roughly estimate it would take about 40 hours!
Moreover, I can't even finish the map because it keeps eating up my hard drive space at an ever-increasing rate.
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.14
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 2024-04-08T09:37:28Z | 6,789 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T02:52:06Z | https://api.github.com/repos/huggingface/datasets/issues/6789/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6789/timeline | Issue with map | https://api.github.com/repos/huggingface/datasets/issues/6789/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/102672238?v=4",
"events_url": "https://api.github.com/users/Nsohko/events{/privacy}",
"followers_url": "https://api.github.com/users/Nsohko/followers",
"following_url": "https://api.github.com/users/Nsohko/following{/other_user}",
"gists_url": "https://api.github.com/users/Nsohko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nsohko",
"id": 102672238,
"login": "Nsohko",
"node_id": "U_kgDOBh6nbg",
"organizations_url": "https://api.github.com/users/Nsohko/orgs",
"received_events_url": "https://api.github.com/users/Nsohko/received_events",
"repos_url": "https://api.github.com/users/Nsohko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nsohko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nsohko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nsohko"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E4-HZ | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing you should probably pass the `processor `as an argument (with e.g. partial) to the function or create it inside so that the sub-processes have access to it and maybe add `if __name__ == \"__main__\"` (not sure that's necessary?).\r\n",
"Hi @Modexus,\r\n\r\nThank you very much for the help! Yep after playing around with map, I managed to get the parallel processing to work by implementing it like you suggested.\r\n\r\nRegarding the temp files, it seems like the temp files just keep growing in size as the map continues. Eventually, once map finishes, the temp files are deleted, but they are instead saved as cache .arrow files. These cache files are absolutely gigantic (~ 30-50x the size of the initial dataset!).\r\n\r\nAfter playing around with the `prepare_dataset()` function above, it seems this issue is caused by the following line in the function, where the log-Mel spectrogram of the audio is calculated:\r\n\r\n`# compute log-Mel input features from input audio array\r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], \r\n sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n`\r\n\r\nWhen I remove this line, the final cache files are approximately the same size as the initial dataset.\r\n\r\nCan I check whether this is expected behavior with the whisper feature extractor? I cant imagine the spectrograms are that large!\r\n\r\nThank you so much for the help!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6789/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6789 | https://github.com/huggingface/datasets/issues/6789 | false |
2,229,207,521 | https://api.github.com/repos/huggingface/datasets/issues/6788/labels{/name} | ### Describe the bug
Hello,
I have a question regarding the map function in the Hugging Face datasets.
The situation is as follows: when I load a jsonl file using `load_dataset(..., streaming=False)` and then use the map function to process it, the examples I return are of type `torch.Tensor`. However, I noticed that after applying the map function, the datatype automatically changes to `list`, which leads to errors in my program.
I attempted to use load_dataset(..., streaming=True), and this issue no longer occurs. I'm not entirely clear on why this happens. Could you please provide some insights into this?
### Steps to reproduce the bug
1. `dataset = load_dataset(xxx, streaming=False)`
2. `dataset.map(function)`, where `function` returns a `torch.Tensor`.
3. You will find that the data in the dataset now has type `list` (see the minimal sketch below).
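A minimal, self-contained version of what I mean (toy data instead of my jsonl file):

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})

def to_tensor(example):
    example["x"] = torch.tensor(example["x"])  # the map function returns a torch.Tensor
    return example

ds = ds.map(to_tensor)
print(type(ds[0]["x"]))  # <class 'list'>: the tensor comes back as a plain list
```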
### Expected behavior
I expected to receive the data as `torch.Tensor`.
### Environment info
2.18.0 | 2024-04-06T11:52:39Z | 6,788 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T11:45:23Z | https://api.github.com/repos/huggingface/datasets/issues/6788/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6788/timeline | A Question About the Map Function | https://api.github.com/repos/huggingface/datasets/issues/6788/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/87431052?v=4",
"events_url": "https://api.github.com/users/ys-lan/events{/privacy}",
"followers_url": "https://api.github.com/users/ys-lan/followers",
"following_url": "https://api.github.com/users/ys-lan/following{/other_user}",
"gists_url": "https://api.github.com/users/ys-lan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ys-lan",
"id": 87431052,
"login": "ys-lan",
"node_id": "MDQ6VXNlcjg3NDMxMDUy",
"organizations_url": "https://api.github.com/users/ys-lan/orgs",
"received_events_url": "https://api.github.com/users/ys-lan/received_events",
"repos_url": "https://api.github.com/users/ys-lan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ys-lan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ys-lan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ys-lan"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E3wHh | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6788/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6788 | https://github.com/huggingface/datasets/issues/6788 | false |
2,229,103,264 | https://api.github.com/repos/huggingface/datasets/issues/6787/labels{/name} | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
    while True:
        continue
    example['a'] = 100
    return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will depend on specific examples (e.g., while most examples take 0.01s in worker, several examples may take 50s).
Therefore, I would like to know how the current implementation will handle those subprocesses that require a long (e.g., >= 5min) or even infinite time.
I notice that the current implementation sets a timeout of 0.05 seconds:
https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L674
However, this example code still gets stuck.
### Steps to reproduce the bug
run the example above
### Expected behavior
I want to set a default worker to handle these timeout cases, instead of getting stuck
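Roughly the behaviour I have in mind, sketched on the user side with `signal.alarm` (POSIX-only, and it only works when the mapped function runs in the process's main thread; an illustration of the desired default, not a proposed internal implementation):

```python
import signal

def run_with_timeout(func, example, timeout_s=300, fallback=None):
    # per-example timeout via SIGALRM (POSIX-only, main-thread-only sketch)
    def _handler(signum, frame):
        raise TimeoutError

    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(timeout_s)
    try:
        return func(example)
    except TimeoutError:
        return fallback if fallback is not None else example
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)

# e.g. data = data.map(lambda ex: run_with_timeout(worker, ex, timeout_s=300))
```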
### Environment info
main branch version | 2024-04-08T14:47:18Z | 6,787 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T06:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6787/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6787/timeline | TimeoutError in map | https://api.github.com/repos/huggingface/datasets/issues/6787/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiaxin-Wen",
"id": 48146603,
"login": "Jiaxin-Wen",
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiaxin-Wen"
} | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps6E3Wqg | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6787/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6787 | https://github.com/huggingface/datasets/issues/6787 | false |
2,228,463,776 | https://api.github.com/repos/huggingface/datasets/issues/6786/labels{/name} | PR for issue #6782.
Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`.
Instead directly convert each `ListArray` item to either `Array2DExtensionType` or `Array3DExtensionType`.
This also preserves the `dtype` removing the warning if the array is already `uint8`. | 2024-04-08T09:18:42Z | 6,786 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T17:00:46Z | https://api.github.com/repos/huggingface/datasets/issues/6786/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6786/timeline | Make Image cast storage faster | https://api.github.com/repos/huggingface/datasets/issues/6786/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6786.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6786",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6786.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6786"
} | PR_kwDODunzps5r3kWg | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6786). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6786/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6786 | https://github.com/huggingface/datasets/pull/6786 | true |
2,228,429,852 | https://api.github.com/repos/huggingface/datasets/issues/6785/labels{/name} | See https://github.com/huggingface/dataset-viewer/issues/2650
Tell me if it's OK, or if it's a breaking change that must be handled differently.
Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it.
And the API URL is still https://datasets-server.huggingface.co/ (and [might always be](https://github.com/huggingface/dataset-viewer/issues/2666)), so I let it too. | 2024-04-08T12:41:13Z | 6,785 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T16:37:05Z | https://api.github.com/repos/huggingface/datasets/issues/6785/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6785/timeline | rename datasets-server to dataset-viewer | https://api.github.com/repos/huggingface/datasets/issues/6785/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | null | CONTRIBUTOR | 2024-04-08T12:35:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6785",
"merged_at": "2024-04-08T12:35:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6785"
} | PR_kwDODunzps5r3dCw | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6785). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005224 / 0.011353 (-0.006129) | 0.003938 / 0.011008 (-0.007070) | 0.063829 / 0.038508 (0.025321) | 0.030975 / 0.023109 (0.007865) | 0.265090 / 0.275898 (-0.010808) | 0.290994 / 0.323480 (-0.032486) | 0.003083 / 0.007986 (-0.004902) | 0.002810 / 0.004328 (-0.001518) | 0.048860 / 0.004250 (0.044609) | 0.044663 / 0.037052 (0.007611) | 0.272161 / 0.258489 (0.013672) | 0.306966 / 0.293841 (0.013125) | 0.028028 / 0.128546 (-0.100518) | 0.010616 / 0.075646 (-0.065031) | 0.211649 / 0.419271 (-0.207623) | 0.035906 / 0.043533 (-0.007626) | 0.251779 / 0.255139 (-0.003360) | 0.275543 / 0.283200 (-0.007657) | 0.017710 / 0.141683 (-0.123973) | 1.127015 / 1.452155 (-0.325139) | 1.173319 / 1.492716 (-0.319397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090625 / 0.018006 (0.072619) | 0.301973 / 0.000490 (0.301483) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018868 / 0.037411 (-0.018543) | 0.062402 / 0.014526 (0.047876) | 0.074053 / 0.176557 (-0.102504) | 0.121484 / 0.737135 (-0.615652) | 0.078674 / 0.296338 (-0.217664) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277821 / 0.215209 (0.062612) | 2.761642 / 2.077655 (0.683987) | 1.452735 / 1.504120 (-0.051385) | 1.336303 / 1.541195 (-0.204891) | 1.343045 / 
1.468490 (-0.125445) | 0.560917 / 4.584777 (-4.023860) | 2.353427 / 3.745712 (-1.392286) | 2.699067 / 5.269862 (-2.570795) | 1.704752 / 4.565676 (-2.860925) | 0.062668 / 0.424275 (-0.361607) | 0.005120 / 0.007607 (-0.002487) | 0.330455 / 0.226044 (0.104410) | 3.264604 / 2.268929 (0.995675) | 1.791940 / 55.444624 (-53.652685) | 1.526083 / 6.876477 (-5.350394) | 1.541429 / 2.142072 (-0.600643) | 0.630343 / 4.805227 (-4.174884) | 0.115189 / 6.500664 (-6.385475) | 0.041716 / 0.075469 (-0.033753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975008 / 1.841788 (-0.866779) | 11.326924 / 8.074308 (3.252616) | 9.810300 / 10.191392 (-0.381092) | 0.141068 / 0.680424 (-0.539356) | 0.013950 / 0.534201 (-0.520251) | 0.285691 / 0.579283 (-0.293592) | 0.257968 / 0.434364 (-0.176396) | 0.322976 / 0.540337 (-0.217361) | 0.411114 / 1.386936 (-0.975822) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005176 / 0.011353 (-0.006177) | 0.003631 / 0.011008 (-0.007377) | 0.050006 / 0.038508 (0.011498) | 0.030622 / 0.023109 (0.007513) | 0.277364 / 0.275898 (0.001466) | 0.299752 / 0.323480 (-0.023728) | 0.004110 / 0.007986 (-0.003876) | 0.002694 / 0.004328 (-0.001634) | 0.048966 / 0.004250 (0.044715) | 0.039634 / 0.037052 (0.002582) | 0.289959 / 0.258489 (0.031470) | 0.320689 / 0.293841 (0.026848) | 0.029285 / 0.128546 (-0.099261) | 0.010435 / 0.075646 (-0.065211) | 0.057432 / 0.419271 (-0.361840) | 0.032554 / 0.043533 (-0.010979) | 0.277354 / 0.255139 (0.022215) | 0.296872 / 0.283200 (0.013673) | 0.017338 / 0.141683 (-0.124344) | 1.134174 / 1.452155 (-0.317981) | 1.184695 / 1.492716 (-0.308021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089953 / 0.018006 (0.071947) | 0.299372 / 0.000490 (0.298882) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021349 / 0.037411 (-0.016062) | 0.075167 / 0.014526 (0.060641) | 0.085910 / 0.176557 (-0.090647) | 0.124729 / 0.737135 (-0.612406) | 0.088313 / 0.296338 (-0.208025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291939 / 0.215209 (0.076730) | 2.851077 / 2.077655 (0.773423) | 1.609382 / 1.504120 (0.105262) | 1.469656 / 1.541195 (-0.071539) | 1.490469 / 1.468490 (0.021979) | 0.570421 / 4.584777 (-4.014356) | 2.441438 / 3.745712 (-1.304274) | 2.756514 / 5.269862 (-2.513347) | 1.714202 / 4.565676 (-2.851474) | 0.063656 / 0.424275 (-0.360619) | 0.005640 / 0.007607 (-0.001967) | 0.336240 / 0.226044 (0.110196) | 3.355434 / 2.268929 (1.086505) | 1.947553 / 55.444624 (-53.497072) | 1.672776 / 6.876477 (-5.203700) | 1.685316 / 2.142072 (-0.456757) | 0.638849 / 4.805227 (-4.166378) | 0.116304 / 6.500664 (-6.384360) | 0.041588 / 0.075469 (-0.033881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026700 / 1.841788 (-0.815088) | 12.044628 / 8.074308 (3.970319) | 10.464007 / 10.191392 (0.272615) | 0.156169 / 0.680424 (-0.524255) | 0.015624 / 0.534201 (-0.518577) | 0.287233 / 0.579283 (-0.292050) | 0.270374 / 0.434364 (-0.163990) | 0.325255 / 0.540337 (-0.215083) | 0.412021 / 1.386936 (-0.974915) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6f7f1718e3db54d7923ebe4383301fdd380c18b9 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6785/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6785 | https://github.com/huggingface/datasets/pull/6785 | true |
2,228,390,504 | https://api.github.com/repos/huggingface/datasets/issues/6784/labels{/name} | Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (deleting extracted archives is not set by default) and slightly speeds up dataset generation (less disk reads) | 2024-04-08T23:33:24Z | 6,784 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-04-05T16:12:25Z | https://api.github.com/repos/huggingface/datasets/issues/6784/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6784/timeline | Extract data on the fly in packaged builders | https://api.github.com/repos/huggingface/datasets/issues/6784/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6784.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6784",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6784.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6784"
} | PR_kwDODunzps5r3UTj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6784). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6784/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6784 | https://github.com/huggingface/datasets/pull/6784 | true |
2,228,179,466 | https://api.github.com/repos/huggingface/datasets/issues/6783/labels{/name} | ### Describe the bug
# problem
I can't resample an audio dataset in a Kaggle Notebook. It looks like some code in the `datasets` library uses aliases that were deprecated in NumPy 1.20.
## code for resampling
```
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
    )
    return inputs
dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
```
## the error I got
<details>
<summary>Click to expand</summary>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[20], line 1
----> 1 dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
2 dataset
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1955, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1952 disable_tqdm = not logging.is_progress_bar_enabled()
1954 if num_proc is None or num_proc == 1:
-> 1955 return self._map_single(
1956 function=function,
1957 with_indices=with_indices,
1958 with_rank=with_rank,
1959 input_columns=input_columns,
1960 batched=batched,
1961 batch_size=batch_size,
1962 drop_last_batch=drop_last_batch,
1963 remove_columns=remove_columns,
1964 keep_in_memory=keep_in_memory,
1965 load_from_cache_file=load_from_cache_file,
1966 cache_file_name=cache_file_name,
1967 writer_batch_size=writer_batch_size,
1968 features=features,
1969 disable_nullable=disable_nullable,
1970 fn_kwargs=fn_kwargs,
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
1973 desc=desc,
1974 )
1975 else:
1977 def format_cache_file_name(cache_file_name, rank):
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:520, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
523 # Remove task templates if a column mapping of the template is no longer valid
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:487, in transmit_format.<locals>.wrapper(*args, **kwargs)
480 self_format = {
481 "type": self._format_type,
482 "format_kwargs": self._format_kwargs,
483 "columns": self._format_columns,
484 "output_all_columns": self._output_all_columns,
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
File /opt/conda/lib/python3.10/site-packages/datasets/fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:2356, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:507, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:184, in TypedSequence.__arrow_array__(self, type)
182 out = numpy_to_pyarrow_listarray(data)
183 elif isinstance(data, list) and data and isinstance(first_non_null_value(data)[1], np.ndarray):
--> 184 out = list_of_np_array_to_pyarrow_listarray(data)
185 else:
186 trying_cast_to_python_objects = True
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1174, in list_of_np_array_to_pyarrow_listarray(l_arr, type)
1172 """Build a PyArrow ListArray from a possibly nested list of NumPy arrays"""
1173 if len(l_arr) > 0:
-> 1174 return list_of_pa_arrays_to_pyarrow_listarray(
1175 [numpy_to_pyarrow_listarray(arr, type=type) if arr is not None else None for arr in l_arr]
1176 )
1177 else:
1178 return pa.array([], type=type)
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1163, in list_of_pa_arrays_to_pyarrow_listarray(l_arr)
1160 null_indices = [i for i, arr in enumerate(l_arr) if arr is None]
1161 l_arr = [arr for arr in l_arr if arr is not None]
1162 offsets = np.cumsum(
-> 1163 [0] + [len(arr) for arr in l_arr], dtype=np.object
1164 ) # convert to dtype object to allow None insertion
1165 offsets = np.insert(offsets, null_indices, None)
1166 offsets = pa.array(offsets, type=pa.int32())
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```
</details>
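A tiny standalone illustration of the failing pattern (it mirrors the `np.cumsum(..., dtype=np.object)` call shown in the traceback above):

```python
import numpy as np

# `np.object` was deprecated in NumPy 1.20 and removed in 1.24, so merely
# referencing the alias raises AttributeError; plain `dtype=object` still works
np.cumsum([0, 1, 2], dtype=np.object)
```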
### Steps to reproduce the bug
Run the above code in a Kaggle Notebook.
### Expected behavior
I can resample the audio data without errors.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyArrow version: 11.0.0
- Pandas version: 2.2.1 | 2024-04-08T16:11:01Z | 6,783 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T14:31:48Z | https://api.github.com/repos/huggingface/datasets/issues/6783/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6783/timeline | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | https://api.github.com/repos/huggingface/datasets/issues/6783/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26062262?v=4",
"events_url": "https://api.github.com/users/petrov826/events{/privacy}",
"followers_url": "https://api.github.com/users/petrov826/followers",
"following_url": "https://api.github.com/users/petrov826/following{/other_user}",
"gists_url": "https://api.github.com/users/petrov826/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/petrov826",
"id": 26062262,
"login": "petrov826",
"node_id": "MDQ6VXNlcjI2MDYyMjYy",
"organizations_url": "https://api.github.com/users/petrov826/orgs",
"received_events_url": "https://api.github.com/users/petrov826/received_events",
"repos_url": "https://api.github.com/users/petrov826/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/petrov826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petrov826/subscriptions",
"type": "User",
"url": "https://api.github.com/users/petrov826"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ez1IK | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6783/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6783 | https://github.com/huggingface/datasets/issues/6783 | false |
2,228,081,955 | https://api.github.com/repos/huggingface/datasets/issues/6782/labels{/name} | ### Describe the bug
Operations that save an image from a path into parquet are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow, but when used on a multi-dimensional numpy array such as an image it takes a very long time.
From the trace below we can see that `__arrow_array__` takes a long time.
It is currently also called in `get_inferred_type`; this should be removable (#6781) but doesn't change the underlying issue.
The conversion to `pyarrow` and back also leads to the `numpy` array having type `int64`, which causes a warning message because the image type expects `uint8`.
However, originally the `numpy` image array was in `uint8`.
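A tiny illustration of the dtype loss I mean (simplified to a flat 1-D array; the real code path goes through nested lists):

```python
import numpy as np
import pyarrow as pa

arr = np.random.randint(0, 255, size=(2, 2, 3), dtype=np.uint8)

# going through pyarrow and back via `.to_pylist()` drops the original uint8 dtype
pa_arr = pa.array(arr.ravel())           # pyarrow uint8 array
back = np.array(pa_arr.to_pylist())      # python ints become int64 on most platforms
print(arr.dtype, back.dtype)             # uint8 int64
```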
### Steps to reproduce the bug
```python
from PIL import Image
import numpy as np
import datasets
import cProfile
image = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
image.save("test_image.jpg")
ds = datasets.Dataset.from_dict(
{"image": ["test_image.jpg"]},
features=datasets.Features({"image": datasets.Image(decode=True)}),
)
# load as numpy array, e.g. for further processing with map
# same result as map returning numpy arrays
ds.set_format("numpy")
cProfile.run("ds.map(writer_batch_size=1, load_from_cache_file=False)", "restats")
```
```bash
Fri Apr 5 14:56:17 2024 restats
66817 function calls (64992 primitive calls) in 33.382 seconds
Ordered by: cumulative time
List reduced from 1073 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
46/1 0.000 0.000 33.382 33.382 {built-in method builtins.exec}
1 0.000 0.000 33.382 33.382 <string>:1(<module>)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:594(wrapper)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:551(wrapper)
1 0.000 0.000 33.379 33.379 arrow_dataset.py:2916(map)
4 0.000 0.000 33.327 8.332 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 33.311 33.311 arrow_writer.py:465(write)
2 0.000 0.000 33.311 16.656 arrow_writer.py:423(write_examples_on_file)
1 0.000 0.000 33.311 33.311 arrow_writer.py:527(write_batch)
2 14.484 7.242 33.260 16.630 arrow_writer.py:161(__arrow_array__)
1 0.001 0.001 16.438 16.438 arrow_writer.py:121(get_inferred_type)
1 0.000 0.000 14.398 14.398 threading.py:637(wait)
1 0.000 0.000 14.398 14.398 threading.py:323(wait)
8 14.398 1.800 14.398 1.800 {method 'acquire' of '_thread.lock' objects}
4/2 0.000 0.000 4.337 2.169 table.py:1800(wrapper)
2 0.000 0.000 4.337 2.169 table.py:1950(cast_array_to_feature)
2 0.475 0.238 4.337 2.169 image.py:209(cast_storage)
9 2.583 0.287 2.583 0.287 {built-in method numpy.array}
2 0.000 0.000 1.284 0.642 image.py:319(encode_np_array)
2 0.000 0.000 1.246 0.623 image.py:301(image_to_bytes)
```
### Expected behavior
The `numpy` image data should be passed through as it will be directly consumed by `pillow` to convert it to bytes.
As an example one can replace `list_of_np_array_to_pyarrow_listarray(data)` in `__arrow_array__` with just `out = data` as a test.
We have to change `cast_storage` of the `Image` feature so it handles the passed through data (& if to handle type before)
```python
bytes_array = pa.array(
[encode_np_array(arr)["bytes"] if arr is not None else None for arr in storage],
type=pa.binary(),
)
```
Leading to the following:
```bash
Fri Apr 5 15:44:27 2024 restats
66419 function calls (64595 primitive calls) in 0.937 seconds
Ordered by: cumulative time
List reduced from 1023 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
47/1 0.000 0.000 0.935 0.935 {built-in method builtins.exec}
2/1 0.000 0.000 0.935 0.935 <string>:1(<module>)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:594(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:551(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:2916(map)
4 0.000 0.000 0.933 0.233 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 0.883 0.883 arrow_writer.py:466(write)
2 0.000 0.000 0.883 0.441 arrow_writer.py:424(write_examples_on_file)
1 0.000 0.000 0.882 0.882 arrow_writer.py:528(write_batch)
2 0.000 0.000 0.877 0.439 arrow_writer.py:161(__arrow_array__)
4/2 0.000 0.000 0.877 0.439 table.py:1800(wrapper)
2 0.000 0.000 0.877 0.439 table.py:1950(cast_array_to_feature)
2 0.009 0.005 0.877 0.439 image.py:209(cast_storage)
2 0.000 0.000 0.868 0.434 image.py:335(encode_np_array)
2 0.000 0.000 0.856 0.428 image.py:317(image_to_bytes)
2 0.000 0.000 0.822 0.411 Image.py:2376(save)
2 0.000 0.000 0.822 0.411 PngImagePlugin.py:1233(_save)
2 0.000 0.000 0.822 0.411 ImageFile.py:517(_save)
2 0.000 0.000 0.821 0.411 ImageFile.py:545(_encode_tile)
589 0.803 0.001 0.803 0.001 {method 'encode' of 'ImagingEncoder' objects}
```
This is of course only a test, as it passes through all `numpy` arrays irrespective of whether they should be an image.
Also I guess `cast_storage` is meant for casting `pyarrow` storage exclusively.
Converting to a `pyarrow` array seems like a good solution as it also handles `pytorch` tensors etc. Maybe there is a more efficient way to create a PIL image from a `pyarrow` array?
Not sure how this should be handled but I would be happy to help if there is a good solution.
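For illustration, here is a rough sketch (untested, and not tied to the actual `datasets` internals) of turning a nested `pyarrow.ListArray` back into a `numpy` array without going through `to_pylist()`:
```python
# Sketch only: assumes rectangular data (e.g. an H x W x C image) and no nulls.
import numpy as np
import pyarrow as pa


def listarray_to_numpy(arr: pa.ListArray) -> np.ndarray:
    shape = [len(arr)]  # includes the row dimension
    while isinstance(arr, pa.ListArray):
        flat = arr.flatten()
        shape.append(len(flat) // max(len(arr), 1))
        arr = flat
    return arr.to_numpy(zero_copy_only=False).reshape(shape)
```
A PIL image could then presumably be created from the result with `Image.fromarray(...)` instead of going through Python lists.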
### Environment info
- `datasets` version: 2.18.1.dev0
- Platform: Linux-6.7.11-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.3.1 | 2024-04-05T21:04:43Z | 6,782 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T13:46:54Z | https://api.github.com/repos/huggingface/datasets/issues/6782/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6782/timeline | Map/Saving Image from external filepath extremely slow | https://api.github.com/repos/huggingface/datasets/issues/6782/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EzdUj | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n shape = ()\r\n while isinstance(arr, pa.ListArray):\r\n len_curr = len(arr)\r\n arr = arr.flatten()\r\n len_new = len(arr)\r\n shape = shape + (len_new // len_curr,)\r\n return shape\r\n\r\n def get_dtypes(arr):\r\n dtype = storage.type\r\n while hasattr(dtype, \"value_type\"):\r\n dtype = dtype.value_type\r\n return dtype\r\n\r\n arrays = []\r\n for i, is_null in enumerate(storage.is_null()):\r\n if not is_null.as_py():\r\n storage_part = storage.take([i])\r\n shape = get_shapes(storage_part)\r\n dtype = get_dtypes(storage_part)\r\n\r\n extension_type = Array3DExtensionType(shape=shape, dtype=str(dtype))\r\n array = pa.ExtensionArray.from_storage(extension_type, storage_part)\r\n arrays.append(array.to_numpy().squeeze(0))\r\n else:\r\n arrays.append(None)\r\n\r\n bytes_array = pa.array(\r\n [encode_np_array(arr)[\"bytes\"] if arr is not None else None for arr in arrays],\r\n type=pa.binary(),\r\n )\r\n path_array = pa.array([None] * len(storage), type=pa.string())\r\n storage = pa.StructArray.from_arrays(\r\n [bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null()\r\n )\r\n```\r\n(Edited): to handle nulls\r\n\r\nNotably this doesn't change anything about the passing through of data or other things, just in the `Image` class.\r\nSeems quite fast:\r\n```bash\r\nFri Apr 5 17:55:51 2024 restats\r\n\r\n 63818 function calls (61995 primitive calls) in 0.812 seconds\r\n\r\n Ordered by: cumulative time\r\n List reduced from 1051 to 20 due to restriction <20>\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 47/1 0.000 0.000 0.810 0.810 {built-in method builtins.exec}\r\n 2/1 0.000 0.000 0.810 0.810 <string>:1(<module>)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:594(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:551(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:2916(map)\r\n 3 0.000 0.000 0.807 0.269 arrow_dataset.py:3277(_map_single)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:589(finalize)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:423(write_examples_on_file)\r\n 1 0.000 0.000 0.759 0.759 arrow_writer.py:527(write_batch)\r\n 1 0.001 0.001 0.754 0.754 arrow_writer.py:161(__arrow_array__)\r\n 2/1 0.000 0.000 0.719 0.719 table.py:1800(wrapper)\r\n 1 0.000 0.000 0.719 0.719 table.py:1950(cast_array_to_feature)\r\n 1 0.006 0.006 0.718 0.718 image.py:209(cast_storage)\r\n 1 0.000 0.000 0.451 0.451 image.py:361(encode_np_array)\r\n 1 0.000 0.000 0.444 0.444 image.py:343(image_to_bytes)\r\n 1 0.000 0.000 0.413 0.413 Image.py:2376(save)\r\n 1 0.000 0.000 0.413 0.413 PngImagePlugin.py:1233(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:517(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:545(_encode_tile)\r\n 397 0.409 0.001 0.409 0.001 {method 'encode' of 'ImagingEncoder' objects}\r\n```",
"Also encounter this problem. Has been strugging with it for a long time..."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6782/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6782 | https://github.com/huggingface/datasets/issues/6782 | false |
2,228,026,497 | https://api.github.com/repos/huggingface/datasets/issues/6781/labels{/name} | Inferring the type seems to be unnecessary given that the pyarrow array has already been created.
Because pyarrow array creation is sometimes extremely slow this doubles the time write_batch takes. | 2024-04-09T07:49:11Z | 6,781 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T13:21:05Z | https://api.github.com/repos/huggingface/datasets/issues/6781/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6781/timeline | Remove get_inferred_type from ArrowWriter write_batch | https://api.github.com/repos/huggingface/datasets/issues/6781/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | 2024-04-09T07:49:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6781",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6781"
} | PR_kwDODunzps5r2DMe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6781). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Close in favor of #6786."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6781/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6781 | https://github.com/huggingface/datasets/pull/6781 | true |
2,226,160,096 | https://api.github.com/repos/huggingface/datasets/issues/6780/labels{/name} | Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova).
Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests. | 2024-04-04T18:46:04Z | 6,780 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:45:04Z | https://api.github.com/repos/huggingface/datasets/issues/6780/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6780/timeline | Fix CI | https://api.github.com/repos/huggingface/datasets/issues/6780/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-04T18:23:34Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6780",
"merged_at": "2024-04-04T18:23:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6780"
} | PR_kwDODunzps5rvkyj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6780). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005074 / 0.011353 (-0.006279) | 0.003395 / 0.011008 (-0.007614) | 0.062358 / 0.038508 (0.023849) | 0.031041 / 0.023109 (0.007932) | 0.244039 / 0.275898 (-0.031859) | 0.266361 / 0.323480 (-0.057119) | 0.003201 / 0.007986 (-0.004785) | 0.002609 / 0.004328 (-0.001719) | 0.049269 / 0.004250 (0.045018) | 0.045713 / 0.037052 (0.008661) | 0.264075 / 0.258489 (0.005586) | 0.295428 / 0.293841 (0.001587) | 0.027882 / 0.128546 (-0.100664) | 0.010424 / 0.075646 (-0.065222) | 0.208417 / 0.419271 (-0.210854) | 0.035728 / 0.043533 (-0.007805) | 0.246803 / 0.255139 (-0.008336) | 0.267169 / 0.283200 (-0.016031) | 0.019797 / 0.141683 (-0.121885) | 1.163299 / 1.452155 (-0.288856) | 1.196118 / 1.492716 (-0.296599) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.106091 / 0.018006 (0.088085) | 0.303970 / 0.000490 (0.303480) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017955 / 0.037411 (-0.019456) | 0.060539 / 0.014526 (0.046013) | 0.072884 / 0.176557 (-0.103673) | 0.119205 / 0.737135 (-0.617931) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272676 / 0.215209 (0.057467) | 2.715169 / 2.077655 (0.637514) | 1.419090 / 1.504120 (-0.085030) | 1.303903 / 1.541195 (-0.237292) | 1.311903 / 
1.468490 (-0.156587) | 0.562005 / 4.584777 (-4.022772) | 2.432817 / 3.745712 (-1.312896) | 2.770599 / 5.269862 (-2.499263) | 1.723043 / 4.565676 (-2.842633) | 0.064341 / 0.424275 (-0.359934) | 0.004923 / 0.007607 (-0.002684) | 0.330507 / 0.226044 (0.104463) | 3.240829 / 2.268929 (0.971901) | 1.787638 / 55.444624 (-53.656986) | 1.522971 / 6.876477 (-5.353506) | 1.529496 / 2.142072 (-0.612576) | 0.645768 / 4.805227 (-4.159459) | 0.116405 / 6.500664 (-6.384259) | 0.041524 / 0.075469 (-0.033945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968515 / 1.841788 (-0.873272) | 11.628911 / 8.074308 (3.554603) | 9.495023 / 10.191392 (-0.696369) | 0.142219 / 0.680424 (-0.538204) | 0.013859 / 0.534201 (-0.520342) | 0.285727 / 0.579283 (-0.293556) | 0.276842 / 0.434364 (-0.157522) | 0.321247 / 0.540337 (-0.219090) | 0.409958 / 1.386936 (-0.976978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005102 / 0.011353 (-0.006251) | 0.003213 / 0.011008 (-0.007796) | 0.049250 / 0.038508 (0.010742) | 0.030649 / 0.023109 (0.007540) | 0.276629 / 0.275898 (0.000731) | 0.297315 / 0.323480 (-0.026165) | 0.004198 / 0.007986 (-0.003787) | 0.002744 / 0.004328 (-0.001585) | 0.047899 / 0.004250 (0.043649) | 0.040596 / 0.037052 (0.003544) | 0.287248 / 0.258489 (0.028759) | 0.313573 / 0.293841 (0.019732) | 0.029067 / 0.128546 (-0.099480) | 0.010122 / 0.075646 (-0.065524) | 0.058869 / 0.419271 (-0.360402) | 0.033012 / 0.043533 (-0.010521) | 0.272995 / 0.255139 (0.017856) | 0.297102 / 0.283200 (0.013903) | 0.018209 / 0.141683 (-0.123474) | 1.157785 / 1.452155 (-0.294369) | 1.184999 / 1.492716 (-0.307717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094228 / 0.018006 (0.076221) | 0.302055 / 0.000490 (0.301565) | 0.000221 / 0.000200 (0.000021) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022020 / 0.037411 (-0.015391) | 0.074970 / 0.014526 (0.060444) | 0.087682 / 0.176557 (-0.088875) | 0.126506 / 0.737135 (-0.610629) | 0.092046 / 0.296338 (-0.204293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295634 / 0.215209 (0.080425) | 2.891554 / 2.077655 (0.813899) | 1.579963 / 1.504120 (0.075843) | 1.462924 / 1.541195 (-0.078271) | 1.463806 / 1.468490 (-0.004684) | 0.558371 / 4.584777 (-4.026406) | 2.513500 / 3.745712 (-1.232212) | 2.754146 / 5.269862 (-2.515716) | 1.762317 / 4.565676 (-2.803360) | 0.063965 / 0.424275 (-0.360310) | 0.005538 / 0.007607 (-0.002069) | 0.348114 / 0.226044 (0.122070) | 3.484558 / 2.268929 (1.215630) | 1.940002 / 55.444624 (-53.504623) | 1.658469 / 6.876477 (-5.218008) | 1.645777 / 2.142072 (-0.496295) | 0.639367 / 4.805227 (-4.165861) | 0.115605 / 6.500664 (-6.385059) | 0.040647 / 0.075469 (-0.034822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036002 / 1.841788 (-0.805786) | 12.286895 / 8.074308 (4.212587) | 10.146719 / 10.191392 (-0.044673) | 0.140867 / 0.680424 (-0.539557) | 0.015517 / 0.534201 (-0.518684) | 0.290126 / 0.579283 (-0.289157) | 0.298702 / 0.434364 (-0.135662) | 0.325518 / 0.540337 (-0.214819) | 0.412597 / 1.386936 (-0.974339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3ddb1ef00334a6f973679a51e783905fbc9ef0b \"CML watermark\")\n"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6780/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6780 | https://github.com/huggingface/datasets/pull/6780 | true |
2,226,075,551 | https://api.github.com/repos/huggingface/datasets/issues/6779/labels{/name} | `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here.
It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and 1.5-2x in the `windows` one.
Besides introducing `uv` in CI, this PR bumps the `tensorflow` minimal version requirement to align with Transformers and simplifies the SpaCy hashing tests (use blank language models instead of the pre-trained ones)
| 2024-04-08T13:34:01Z | 6,779 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:02:51Z | https://api.github.com/repos/huggingface/datasets/issues/6779/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6779/timeline | Install dependencies with `uv` in CI | https://api.github.com/repos/huggingface/datasets/issues/6779/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-08T13:27:44Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6779",
"merged_at": "2024-04-08T13:27:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6779"
} | PR_kwDODunzps5rvSA8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6779). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005336 / 0.011353 (-0.006017) | 0.004052 / 0.011008 (-0.006956) | 0.063475 / 0.038508 (0.024967) | 0.032963 / 0.023109 (0.009854) | 0.243906 / 0.275898 (-0.031992) | 0.269048 / 0.323480 (-0.054432) | 0.003363 / 0.007986 (-0.004622) | 0.002802 / 0.004328 (-0.001527) | 0.049487 / 0.004250 (0.045236) | 0.046990 / 0.037052 (0.009938) | 0.260169 / 0.258489 (0.001680) | 0.289145 / 0.293841 (-0.004696) | 0.028030 / 0.128546 (-0.100517) | 0.010706 / 0.075646 (-0.064940) | 0.213640 / 0.419271 (-0.205632) | 0.035866 / 0.043533 (-0.007667) | 0.245106 / 0.255139 (-0.010033) | 0.269588 / 0.283200 (-0.013612) | 0.019791 / 0.141683 (-0.121892) | 1.117684 / 1.452155 (-0.334470) | 1.183389 / 1.492716 (-0.309327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095736 / 0.018006 (0.077730) | 0.302586 / 0.000490 (0.302097) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018985 / 0.037411 (-0.018426) | 0.062097 / 0.014526 (0.047571) | 0.075617 / 0.176557 (-0.100939) | 0.120570 / 0.737135 (-0.616566) | 0.075949 / 0.296338 (-0.220390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279597 / 0.215209 (0.064388) | 2.754319 / 2.077655 (0.676665) | 1.444147 / 1.504120 (-0.059973) | 1.328414 / 1.541195 (-0.212781) | 1.371073 / 
1.468490 (-0.097417) | 0.553851 / 4.584777 (-4.030926) | 2.351694 / 3.745712 (-1.394018) | 2.860771 / 5.269862 (-2.409091) | 1.749664 / 4.565676 (-2.816013) | 0.061736 / 0.424275 (-0.362539) | 0.005073 / 0.007607 (-0.002534) | 0.329974 / 0.226044 (0.103930) | 3.300487 / 2.268929 (1.031558) | 1.812809 / 55.444624 (-53.631815) | 1.559018 / 6.876477 (-5.317458) | 1.628664 / 2.142072 (-0.513408) | 0.635757 / 4.805227 (-4.169471) | 0.116468 / 6.500664 (-6.384196) | 0.042641 / 0.075469 (-0.032828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972048 / 1.841788 (-0.869740) | 11.952721 / 8.074308 (3.878412) | 9.754274 / 10.191392 (-0.437118) | 0.132026 / 0.680424 (-0.548398) | 0.015352 / 0.534201 (-0.518849) | 0.290574 / 0.579283 (-0.288709) | 0.275384 / 0.434364 (-0.158980) | 0.330688 / 0.540337 (-0.209650) | 0.414868 / 1.386936 (-0.972068) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005412 / 0.011353 (-0.005941) | 0.003814 / 0.011008 (-0.007194) | 0.049988 / 0.038508 (0.011480) | 0.031617 / 0.023109 (0.008507) | 0.278975 / 0.275898 (0.003077) | 0.303540 / 0.323480 (-0.019940) | 0.004265 / 0.007986 (-0.003721) | 0.002804 / 0.004328 (-0.001525) | 0.049518 / 0.004250 (0.045268) | 0.041176 / 0.037052 (0.004123) | 0.291248 / 0.258489 (0.032759) | 0.317401 / 0.293841 (0.023560) | 0.029501 / 0.128546 (-0.099045) | 0.010392 / 0.075646 (-0.065255) | 0.057906 / 0.419271 (-0.361365) | 0.033056 / 0.043533 (-0.010477) | 0.280202 / 0.255139 (0.025063) | 0.298684 / 0.283200 (0.015484) | 0.018071 / 0.141683 (-0.123612) | 1.167691 / 1.452155 (-0.284464) | 1.211322 / 1.492716 (-0.281394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092325 / 0.018006 (0.074318) | 0.301209 / 0.000490 (0.300719) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021432 / 0.037411 (-0.015980) | 0.074556 / 0.014526 (0.060031) | 0.086049 / 0.176557 (-0.090508) | 0.125151 / 0.737135 (-0.611984) | 0.088279 / 0.296338 (-0.208059) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296755 / 0.215209 (0.081546) | 2.922650 / 2.077655 (0.844995) | 1.606031 / 1.504120 (0.101911) | 1.489692 / 1.541195 (-0.051502) | 1.530206 / 1.468490 (0.061716) | 0.577827 / 4.584777 (-4.006950) | 2.459716 / 3.745712 (-1.285997) | 2.825192 / 5.269862 (-2.444669) | 1.788110 / 4.565676 (-2.777566) | 0.064011 / 0.424275 (-0.360264) | 0.005616 / 0.007607 (-0.001991) | 0.341612 / 0.226044 (0.115568) | 3.455123 / 2.268929 (1.186194) | 1.961635 / 55.444624 (-53.482990) | 1.688107 / 6.876477 (-5.188370) | 1.725490 / 2.142072 (-0.416583) | 0.656011 / 4.805227 (-4.149216) | 0.117633 / 6.500664 (-6.383031) | 0.041386 / 0.075469 (-0.034083) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025786 / 1.841788 (-0.816002) | 12.294598 / 8.074308 (4.220290) | 10.241136 / 10.191392 (0.049744) | 0.130577 / 0.680424 (-0.549847) | 0.016094 / 0.534201 (-0.518107) | 0.291193 / 0.579283 (-0.288090) | 0.273016 / 0.434364 (-0.161348) | 0.327553 / 0.540337 (-0.212784) | 0.418556 / 1.386936 (-0.968380) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3575036af2fd5cccff7fa60de30e2e444cf8a54e \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6779/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6779 | https://github.com/huggingface/datasets/pull/6779 | true |
2,226,040,636 | https://api.github.com/repos/huggingface/datasets/issues/6778/labels{/name} | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the Dataset is loaded back in, the data structure of the column containing a list is not correct.
Here's an example:
Obviously, it's not as trivial as inserting commas in the list, since it's a comma-separated file. But hopefully there's a way to export the list so that it will be imported correctly by `load_dataset()`.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset
ds = Dataset.from_dict(
{
"pokemon": ["bulbasaur", "squirtle"],
"type": ["grass", "water"]
}
)
def ascii_to_hex(text):
    # note: despite the name, this returns decimal code points (ord values)
    return [ord(c) for c in text]
ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
`temp.csv` then contains the actual output shown below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
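In the meantime, a possible workaround (a rough sketch, assuming the list column holds numpy arrays as in the repro above) is to go through pandas and convert the arrays to plain Python lists before writing:
```python
# Workaround sketch: numpy arrays become Python lists so they serialize with commas.
# "int" is the column name from the repro above.
df = ds.to_pandas()
df["int"] = df["int"].map(list)
df.to_csv("../output/temp.csv", index=False)
```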
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | 2024-04-08T15:24:41Z | 6,778 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T16:46:13Z | https://api.github.com/repos/huggingface/datasets/issues/6778/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6778/timeline | Dataset.to_csv() missing commas in columns with lists | https://api.github.com/repos/huggingface/datasets/issues/6778/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/100041276?v=4",
"events_url": "https://api.github.com/users/mpickard-dataprof/events{/privacy}",
"followers_url": "https://api.github.com/users/mpickard-dataprof/followers",
"following_url": "https://api.github.com/users/mpickard-dataprof/following{/other_user}",
"gists_url": "https://api.github.com/users/mpickard-dataprof/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mpickard-dataprof",
"id": 100041276,
"login": "mpickard-dataprof",
"node_id": "U_kgDOBfaCPA",
"organizations_url": "https://api.github.com/users/mpickard-dataprof/orgs",
"received_events_url": "https://api.github.com/users/mpickard-dataprof/received_events",
"repos_url": "https://api.github.com/users/mpickard-dataprof/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mpickard-dataprof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpickard-dataprof/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mpickard-dataprof"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Erq88 | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: list(arr))\r\ndf.to_csv(index=False, '../output/temp.csv')\r\n```\r\n\r\nI think it would be good if `datasets` would do the conversion itself, but it's a breaking change and I would wait for the greenlight from someone from HF."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6778/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6778 | https://github.com/huggingface/datasets/issues/6778 | false |
2,224,611,247 | https://api.github.com/repos/huggingface/datasets/issues/6777/labels{/name} | ### Describe the bug
Hi, I have the following directory structure:
```
|-- dataset
|   |-- images
|   |-- metadata1000.csv
|   |-- metadata1000.jsonl
|   |-- padded_images
```
Example of metadata1000.jsonl file:
```
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
...
```
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset, however it is not able to load it according to the fields in `metadata1000.jsonl`.
Please assist with loading the data properly.
I also get the following error:
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
### Steps to reproduce the bug
dataset Version: 2.18.0
make a similar jsonl and similar directory format
### Expected behavior
It should create a dataset object with the columns `caption`, `image`, and `gaussian_padded_image`.
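Something along these lines (illustrative only, this is just how I would expect the inferred features to look):
```python
# Illustrative: the features I'd expect to be inferred from metadata1000.jsonl.
from datasets import Features, Image, Value

expected_features = Features(
    {"caption": Value("string"), "image": Image(), "gaussian_padded_image": Image()}
)
```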
### Environment info
dataset Version: 2.18.0 | 2024-04-05T21:14:48Z | 6,777 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T06:31:53Z | https://api.github.com/repos/huggingface/datasets/issues/6777/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6777/timeline | .Jsonl metadata not detected | https://api.github.com/repos/huggingface/datasets/issues/6777/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/81643693?v=4",
"events_url": "https://api.github.com/users/nighting0le01/events{/privacy}",
"followers_url": "https://api.github.com/users/nighting0le01/followers",
"following_url": "https://api.github.com/users/nighting0le01/following{/other_user}",
"gists_url": "https://api.github.com/users/nighting0le01/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nighting0le01",
"id": 81643693,
"login": "nighting0le01",
"node_id": "MDQ6VXNlcjgxNjQzNjkz",
"organizations_url": "https://api.github.com/users/nighting0le01/orgs",
"received_events_url": "https://api.github.com/users/nighting0le01/received_events",
"repos_url": "https://api.github.com/users/nighting0le01/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nighting0le01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nighting0le01/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nighting0le01"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EmN-v | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/81643693/3754980c-6185-4413-88fa-b499bcdd4195\">\r\n\r\ndataset = load_dataset('/dataset',metadata.csv) \r\n\r\n| workspace\r\n|| source code\r\n| dataset\r\n| |-- images\r\n| |-- metadata.csv\r\n| |-- metadata.jsonl\r\n| |-- padded_images\r\n\r\nExample of metadata.jsonl file\r\n{\"caption\": \"a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle\", \"image\": \"images/212734.png\", \"gaussian_padded_image\": \"padded_images/p_212734.png\"}\r\n{\"caption\": \"an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes\", \"image\": \"images/212735.png\", \"gaussian_padded_image\": \"padded_images/p_212735.png\"}\r\n",
"Loading more than one image per row with `imagefolder` is not supported currently. You can subscribe to https://github.com/huggingface/datasets/issues/5760 to see when it will be.\r\n\r\nInstead, you can load the dataset with `Dataset.from_generator`:\r\n```python\r\nimport json\r\nfrom datasets import Dataset, Value, Image, Features\r\n\r\ndef gen():\r\n with open(\"./dataset/metadata.jsonl\") as f:\r\n for line in f:\r\n line = json.loads(line)\r\n yield {\"caption\": line[\"caption\"], \"image\": os.path.join(\"./dataset\", line[\"image\"], \"gaussian_padded_image\": os.path.join(\"./dataset\", line[\"gaussian_padded_image\"]))}\r\n\r\nfeatures = Features({\"caption\": Value(\"string\"), \"image\": Image(), \"gaussian_padded_image\": Image()})\r\ndataset = Dataset.from_generator(gen, features=features)\r\n```\r\n(E.g., if you want to share this dataset on the Hub, you can call `dataset.push_to_hub(...)` afterward)",
"hi Thanks for sharing this, Actually I was trying with a webdataset format of the data as well and it did'nt work. Could you share how i can create Dataset object from webdataset format of this data?"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6777/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6777 | https://github.com/huggingface/datasets/issues/6777 | false |
2,223,457,792 | https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name} | ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training completes successfully (the example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).
![image](https://github.com/huggingface/datasets/assets/38481564/47fa2de3-95e0-478b-a35f-58cbaf90427a)
I see the files are being read correctly from the logs:
![image](https://github.com/huggingface/datasets/assets/38481564/b0b6316c-2cc7-476c-9674-ca2222c8f4e3)
### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset to `kk2491/finetune_dataset_002`.
### Expected behavior
The training should complete successfully, and model gets deployed to an endpoint.
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
| 2024-04-08T01:24:35Z | 6,775 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T17:06:30Z | https://api.github.com/repos/huggingface/datasets/issues/6775/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6775/timeline | IndexError: Invalid key: 0 is out of bounds for size 0 | https://api.github.com/repos/huggingface/datasets/issues/6775/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://api.github.com/users/kk2491/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kk2491",
"id": 38481564,
"login": "kk2491",
"node_id": "MDQ6VXNlcjM4NDgxNTY0",
"organizations_url": "https://api.github.com/users/kk2491/orgs",
"received_events_url": "https://api.github.com/users/kk2491/received_events",
"repos_url": "https://api.github.com/users/kk2491/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kk2491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kk2491/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kk2491"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Eh0YA | [
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container) ",
"I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess. ",
"> Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in [huggingface/peft#1299](https://github.com/huggingface/peft/issues/1299).\r\n> \r\n> (I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container)\r\n\r\n@mariosasko Thanks for the response and suggestion. \r\nWhen I set `remove_unused_columns` as `False` , I end up getting different error (will post the error soon). \r\nEither the Vertex-AI does not support `remove_unused_columns` or my dataset is completely wrong. \r\n\r\nThank you, \r\nKK",
"> I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n\r\n@cyberyu Thanks for your suggestions. \r\nI have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. \r\nHowever in my case, the issue persists. I am gonna give few more tries, and post the results here. \r\nYou can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main) \r\n\r\nThank you, \r\nKK ",
"> > I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n> \r\n> @cyberyu Thanks for your suggestions. I have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. However in my case, the issue persists. I am gonna give few more tries, and post the results here. You can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main)\r\n> \r\n> Thank you, KK\r\n\r\nI think another reason is your training sample length is too short. I saw a relevant report (https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/16) stating that the processing code might have a bug discarding sequence length short than max_seq_length, which is 512. Not sure the Vertex AI backend code has fixed that bug or not. So I tried to add some garbage content in your data, and extended the length longer than 512 for a single turn, and repeated twice. You can copy the following line as 5 repeated lines as your training data jsonl file of five samples (no eval or test needed, for speed up, set evaluation step to 5 and training step to 10,), and it will pass.\r\n\r\n{\"text\":\"### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment. ### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. 
You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment.\"}\r\n",
"@cyberyu **Thank you so much, You saved my day (+ so many days)**. \r\nI tried the example you provided above, and the training is successfully completed in Vertex-AI (through GUI). \r\nI never thought there would be constraints on the length of the samples and also on the number of turns. \r\nI will update my complete dataset and see update here once the training is completed. \r\n\r\nThank you, \r\nKK "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6775 | https://github.com/huggingface/datasets/issues/6775 | false |
2,222,164,316 | https://api.github.com/repos/huggingface/datasets/issues/6774/labels{/name} | ### Describe the bug
When I create a dataset, it gets stuck while generating cached data.
The image format is PNG; it does not get stuck when the image format is JPEG.
![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152)
After debugging, I know that it is because of the `pa.array` operation in [arrow_writer](https://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_writer.py#L553), but I don't know why.
### Steps to reproduce the bug
```
from PIL import Image

from datasets import Dataset


def generator(lines):
    for line in lines:
        # each `line` is assumed to be a record with a "url" field pointing to a local PNG file
        img = Image.open(open(line["url"], "rb"))
        # print(img.format)  # "PNG"
        yield {
            "image": img,
        }

lines = open(dataset_path, "r")
dataset = Dataset.from_generator(
generator,
gen_kwargs={"lines": lines}
)
```
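For reference, a quick standalone timing sketch (illustrative only, using a random 2048x2048 image rather than my actual data) suggests that PNG encoding itself is much slower than JPEG, which may explain part of the slowdown:
```python
# Timing sketch: compare PNG vs JPEG encoding cost for a single in-memory image.
import io
import time

import numpy as np
from PIL import Image

img = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
for fmt in ("PNG", "JPEG"):
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format=fmt)
    print(fmt, f"{time.perf_counter() - start:.3f}s", f"{buf.tell()} bytes")
```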
### Expected behavior
Generating split done.
### Environment info
datasets 2.13.0 | 2024-04-03T07:47:31Z | 6,774 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T07:47:31Z | https://api.github.com/repos/huggingface/datasets/issues/6774/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6774/timeline | Generating split is very slow when Image format is PNG | https://api.github.com/repos/huggingface/datasets/issues/6774/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22740819?v=4",
"events_url": "https://api.github.com/users/Tramac/events{/privacy}",
"followers_url": "https://api.github.com/users/Tramac/followers",
"following_url": "https://api.github.com/users/Tramac/following{/other_user}",
"gists_url": "https://api.github.com/users/Tramac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tramac",
"id": 22740819,
"login": "Tramac",
"node_id": "MDQ6VXNlcjIyNzQwODE5",
"organizations_url": "https://api.github.com/users/Tramac/orgs",
"received_events_url": "https://api.github.com/users/Tramac/received_events",
"repos_url": "https://api.github.com/users/Tramac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tramac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tramac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tramac"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ec4lc | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6774/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6774 | https://github.com/huggingface/datasets/issues/6774 | false |
2,221,049,121 | https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name} | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
__EDIT:__ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, the `map()` calls should also retrieve their cached output, but the `map()` commands sometimes re-execute.
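If it is the `map()` cache that misses, one workaround sketch (not verified against this repo) is to replace the inline lambdas with named, module-level functions so that `map` computes a stable fingerprint it can find again on the next run:
```python
from datasets import load_dataset


def split_claimants(row):
    # Same transformation as the lambda in lib.py, defined at module level so
    # its fingerprint stays stable across runs.
    return {"Claimants": row["Claimants"].split(";")}


territories = load_dataset("manestay/borderlines", "territories")
territories = territories.map(split_claimants)
```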
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 2024-04-08T18:43:45Z | 6,773 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T17:23:22Z | https://api.github.com/repos/huggingface/datasets/issues/6773/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6773/timeline | Dataset on Hub re-downloads every time? | https://api.github.com/repos/huggingface/datasets/issues/6773/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "https://api.github.com/users/manestay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manestay",
"id": 9099139,
"login": "manestay",
"node_id": "MDQ6VXNlcjkwOTkxMzk=",
"organizations_url": "https://api.github.com/users/manestay/orgs",
"received_events_url": "https://api.github.com/users/manestay/received_events",
"repos_url": "https://api.github.com/users/manestay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manestay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manestay"
} | [] | null | completed | NONE | 2024-04-08T18:43:45Z | null | I_kwDODunzps6EYoUh | [
"The caching works as expected when I try to reproduce this locally or on Colab...",
"hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the function output for `territories.map(lambda row: {'Claimants': row['Claimants'].split(';')})`. My current run re-ran this, even though I have run this many times before, and as demonstrated by loading from cache, the loaded dataset is the same.\r\n\r\nI wonder if the issue stems from using CSV output. Do you recommend changing to Parquet, and if so, is there an easy way to take the already uploaded data on the Hub and reformat?",
"This issue seems similar to https://github.com/huggingface/datasets/issues/6184 (`dill` serializes objects defined outside the `__main__` module by reference). You should be able to work around this limitation by defining the lambdas outside of `load_borderlines_hf` (as module variables) and then setting their `__module__` attribute's value to `None` to force serializing them by value, e.g., like this: \r\n```python\r\nsplit_Claimants_row = lambda row: {'Claimants': row['Claimants'].split(';')}\r\nsplit_Claimants_row.__module__ = None\r\n```",
"Thank you, I'll give this a try. Your fix makes sense to me, so this issue can be closed for now.\r\n\r\nUnrelated comment -- for \"Downloads last month\" on the hub page, I'm assuming for this project that each downloaded CSV is 1 download? The dataset consists of 51 CSVs, so I'm trying to see why it's incrementing so quickly (1125 2 days ago, 1246 right now).",
"This doc explains how we count \"Downloads last month\": https://huggingface.co/docs/hub/datasets-download-stats"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6773 | https://github.com/huggingface/datasets/issues/6773 | false |
2,220,851,533 | https://api.github.com/repos/huggingface/datasets/issues/6772/labels{/name} | Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls.
Reported in https://github.com/huggingface/datasets/issues/6700 | 2024-04-02T16:28:45Z | 6,772 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-02T15:41:28Z | https://api.github.com/repos/huggingface/datasets/issues/6772/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6772/timeline | `remove_columns`/`rename_columns` doc fixes | https://api.github.com/repos/huggingface/datasets/issues/6772/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-02T16:17:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6772",
"merged_at": "2024-04-02T16:17:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6772"
} | PR_kwDODunzps5rdKZ2 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6772). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005728 / 0.011353 (-0.005624) | 0.003809 / 0.011008 (-0.007199) | 0.062930 / 0.038508 (0.024422) | 0.032320 / 0.023109 (0.009211) | 0.251072 / 0.275898 (-0.024826) | 0.275397 / 0.323480 (-0.048083) | 0.003314 / 0.007986 (-0.004671) | 0.002869 / 0.004328 (-0.001460) | 0.049070 / 0.004250 (0.044819) | 0.049282 / 0.037052 (0.012229) | 0.263546 / 0.258489 (0.005057) | 0.291471 / 0.293841 (-0.002370) | 0.028462 / 0.128546 (-0.100084) | 0.010528 / 0.075646 (-0.065119) | 0.211249 / 0.419271 (-0.208023) | 0.036840 / 0.043533 (-0.006693) | 0.250038 / 0.255139 (-0.005101) | 0.268883 / 0.283200 (-0.014317) | 0.021417 / 0.141683 (-0.120266) | 1.139754 / 1.452155 (-0.312400) | 1.197319 / 1.492716 (-0.295397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094191 / 0.018006 (0.076185) | 0.302413 / 0.000490 (0.301923) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018490 / 0.037411 (-0.018922) | 0.063361 / 0.014526 (0.048835) | 0.075854 / 0.176557 (-0.100702) | 0.121499 / 0.737135 (-0.615637) | 0.075982 / 0.296338 (-0.220356) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286030 / 0.215209 (0.070821) | 2.778487 / 2.077655 (0.700832) | 1.440963 / 1.504120 (-0.063157) | 1.326217 / 1.541195 (-0.214977) | 1.359228 / 
1.468490 (-0.109262) | 0.566999 / 4.584777 (-4.017778) | 2.453344 / 3.745712 (-1.292368) | 2.841448 / 5.269862 (-2.428413) | 1.825197 / 4.565676 (-2.740479) | 0.062301 / 0.424275 (-0.361974) | 0.004948 / 0.007607 (-0.002659) | 0.334578 / 0.226044 (0.108534) | 3.302327 / 2.268929 (1.033399) | 1.799808 / 55.444624 (-53.644817) | 1.529693 / 6.876477 (-5.346783) | 1.564684 / 2.142072 (-0.577389) | 0.632891 / 4.805227 (-4.172336) | 0.116594 / 6.500664 (-6.384070) | 0.042695 / 0.075469 (-0.032774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999994 / 1.841788 (-0.841794) | 12.767365 / 8.074308 (4.693057) | 10.550439 / 10.191392 (0.359047) | 0.133437 / 0.680424 (-0.546986) | 0.015252 / 0.534201 (-0.518949) | 0.293285 / 0.579283 (-0.285998) | 0.274773 / 0.434364 (-0.159590) | 0.328718 / 0.540337 (-0.211619) | 0.428021 / 1.386936 (-0.958915) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005538 / 0.011353 (-0.005815) | 0.003738 / 0.011008 (-0.007271) | 0.050179 / 0.038508 (0.011671) | 0.032441 / 0.023109 (0.009332) | 0.294721 / 0.275898 (0.018823) | 0.322616 / 0.323480 (-0.000864) | 0.004255 / 0.007986 (-0.003731) | 0.002913 / 0.004328 (-0.001416) | 0.049044 / 0.004250 (0.044794) | 0.042361 / 0.037052 (0.005309) | 0.304162 / 0.258489 (0.045673) | 0.332757 / 0.293841 (0.038916) | 0.029355 / 0.128546 (-0.099191) | 0.010546 / 0.075646 (-0.065100) | 0.058213 / 0.419271 (-0.361058) | 0.032648 / 0.043533 (-0.010885) | 0.298241 / 0.255139 (0.043102) | 0.313710 / 0.283200 (0.030510) | 0.017836 / 0.141683 (-0.123847) | 1.135050 / 1.452155 (-0.317104) | 1.178277 / 1.492716 (-0.314439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094387 / 0.018006 (0.076381) | 0.301955 / 0.000490 (0.301466) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023135 / 0.037411 (-0.014276) | 0.078109 / 0.014526 (0.063583) | 0.087519 / 0.176557 (-0.089037) | 0.127815 / 0.737135 (-0.609320) | 0.090107 / 0.296338 (-0.206231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289149 / 0.215209 (0.073940) | 2.832354 / 2.077655 (0.754699) | 1.574003 / 1.504120 (0.069883) | 1.449190 / 1.541195 (-0.092005) | 1.465798 / 1.468490 (-0.002692) | 0.561953 / 4.584777 (-4.022824) | 2.445788 / 3.745712 (-1.299924) | 2.882453 / 5.269862 (-2.387409) | 1.813267 / 4.565676 (-2.752409) | 0.063163 / 0.424275 (-0.361112) | 0.005785 / 0.007607 (-0.001822) | 0.340125 / 0.226044 (0.114081) | 3.355370 / 2.268929 (1.086442) | 1.924226 / 55.444624 (-53.520398) | 1.643242 / 6.876477 (-5.233234) | 1.650149 / 2.142072 (-0.491924) | 0.654818 / 4.805227 (-4.150409) | 0.114968 / 6.500664 (-6.385696) | 0.042044 / 0.075469 (-0.033425) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024867 / 1.841788 (-0.816921) | 12.656140 / 8.074308 (4.581832) | 10.927014 / 10.191392 (0.735622) | 0.155929 / 0.680424 (-0.524495) | 0.015356 / 0.534201 (-0.518845) | 0.289834 / 0.579283 (-0.289449) | 0.280889 / 0.434364 (-0.153475) | 0.331490 / 0.540337 (-0.208847) | 0.418037 / 1.386936 (-0.968899) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad3467e9b138d1a9b87b661828a71139f4e46ece \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6772/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6772 | https://github.com/huggingface/datasets/pull/6772 | true |
2,220,131,457 | https://api.github.com/repos/huggingface/datasets/issues/6771/labels{/name} | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loading my dataset as below.
```py
from datasets import load_dataset, IterableDatasetDict
dataset = IterableDatasetDict()
dataset["train"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="train", use_auth_token=True, streaming=True)
dataset["test"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="test", use_auth_token=True, streaming=True)
```
And when I try to see the data I have loaded with
```py
list(dataset["train"].take(1))
```
And it gives me this stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 list(dataset["train"].take(1))
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1388, in IterableDataset.__iter__(self)
1385 yield formatter.format_row(pa_table)
1386 return
-> 1388 for key, example in ex_iterable:
1389 if self.features:
1390 # `IterableDataset` automatically fills missing columns with None.
1391 # This is done with `_apply_feature_types_on_example`.
1392 example = _apply_feature_types_on_example(
1393 example, self.features, token_per_repo_id=self._token_per_repo_id
1394 )
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1044, in TakeExamplesIterable.__iter__(self)
1043 def __iter__(self):
-> 1044 yield from islice(self.ex_iterable, self.n)
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:234, in ExamplesIterable.__iter__(self)
233 def __iter__(self):
--> 234 yield from self.generate_examples_fn(**self.kwargs)
File ~/.cache/huggingface/modules/datasets_modules/datasets/RitchieP--VerbaLex_voice/9465eaee58383cf9d7c3e14111d7abaea56398185a641b646897d6df4e4732f7/VerbaLex_voice.py:127, in VerbaLexVoiceDataset._generate_examples(self, local_extracted_archive_paths, archives, meta_path)
125 for i, audio_archive in enumerate(archives):
126 print(audio_archive)
--> 127 for path, file in audio_archive:
128 _, filename = os.path.split(path)
129 if filename in metadata:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:869, in _IterableFromGenerator.__iter__(self)
868 def __iter__(self):
--> 869 yield from self.generator(*self.args, **self.kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:919, in ArchiveIterable._iter_from_urlpath(cls, urlpath, download_config)
915 @classmethod
916 def _iter_from_urlpath(
917 cls, urlpath: str, download_config: Optional[DownloadConfig] = None
918 ) -> Generator[Tuple, None, None]:
--> 919 compression = _get_extraction_protocol(urlpath, download_config=download_config)
920 # Set block_size=0 to get faster streaming
921 # (e.g. for hf:// and https:// it uses streaming Requests file-like instances)
922 with xopen(urlpath, "rb", download_config=download_config, block_size=0) as f:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:400, in _get_extraction_protocol(urlpath, download_config)
398 urlpath, storage_options = _prepare_path_and_storage_options(urlpath, download_config=download_config)
399 try:
--> 400 with fsspec.open(urlpath, **(storage_options or {})) as f:
401 return _get_extraction_protocol_with_magic_number(f)
402 except FileNotFoundError:
File /opt/conda/lib/python3.10/site-packages/fsspec/core.py:100, in OpenFile.__enter__(self)
97 def __enter__(self):
98 mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 100 f = self.fs.open(self.path, mode=mode)
102 self.fobjects = [f]
104 if self.compression is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:1307, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
1305 else:
1306 ac = kwargs.pop("autocommit", not self._intrans)
-> 1307 f = self._open(
1308 path,
1309 mode=mode,
1310 block_size=block_size,
1311 autocommit=ac,
1312 cache_options=cache_options,
1313 **kwargs,
1314 )
1315 if compression is not None:
1316 from fsspec.compression import compr
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:180, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
178 if self.auto_mkdir and "w" in mode:
179 self.makedirs(self._parent(path), exist_ok=True)
--> 180 return LocalFileOpener(path, mode, fs=self, **kwargs)
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:302, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
300 self.compression = get_compression(path, compression)
301 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 302 self._open()
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:307, in LocalFileOpener._open(self)
305 if self.f is None or self.f.closed:
306 if self.autocommit or "w" not in self.mode:
--> 307 self.f = open(self.path, mode=self.mode)
308 if self.compression:
309 compress = compr[self.compression]
FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/h'
```
After looking into the stack trace and referring to the source code, it looks like it's trying to access a directory in the notebook's environment, and I don't understand why.
Not sure if it's a bug in the Datasets library, so I'm opening a discussion first. Feel free to ask for more information if needed. Appreciate any help in advance!</div>
Hi, referring to the discussion title above: after further digging, I think it's an issue within the datasets library, but I'm not quite sure where it is.
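One hedged reading of the trace (an assumption, not a confirmed diagnosis): the failing path `/kaggle/working/h` looks like the single character `h` resolved against the notebook's working directory, which is what happens when a URL string such as `hf://...` is iterated character by character somewhere in the loading script instead of being wrapped in a list:
```python
# Hypothetical illustration of the suspected failure mode (the URL is made up;
# only its "hf://..." shape matters here).
archives = "hf://datasets/some-user/some-dataset/archive.tar"

for audio_archive in archives:
    # Iterating a string yields single characters: "h", "f", ":", ...
    # fsspec then resolves "h" as a relative local path -> "/kaggle/working/h".
    print(audio_archive)
    break
```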
If you require any more info or actions from me, please let me know. Appreciate any help in advance! | 2024-04-04T14:22:03Z | 6,771 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T10:24:57Z | https://api.github.com/repos/huggingface/datasets/issues/6771/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6771/timeline | Datasets FileNotFoundError when trying to generate examples. | https://api.github.com/repos/huggingface/datasets/issues/6771/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "https://api.github.com/users/RitchieP/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RitchieP",
"id": 26197115,
"login": "RitchieP",
"node_id": "MDQ6VXNlcjI2MTk3MTE1",
"organizations_url": "https://api.github.com/users/RitchieP/orgs",
"received_events_url": "https://api.github.com/users/RitchieP/received_events",
"repos_url": "https://api.github.com/users/RitchieP/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RitchieP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RitchieP/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RitchieP"
} | [] | null | completed | NONE | 2024-04-04T14:22:03Z | null | I_kwDODunzps6EVISB | [
"Hi! I've opened a PR in the repo to fix this issue: https://huggingface.co/datasets/RitchieP/VerbaLex_voice/discussions/6",
"@mariosasko Thanks for the PR and help! Guess I could close the issue for now. Appreciate the help!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6771/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6771 | https://github.com/huggingface/datasets/issues/6771 | false |
2,218,991,883 | https://api.github.com/repos/huggingface/datasets/issues/6770/labels{/name} | ### Describe the bug
`Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly.
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2`.
2. Run the following code:
```
from datasets import load_dataset
dataset = load_dataset("trec")
```
3. Then one will get the following error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/trec@65752bf53af25bc935a0dce92fb5b6c930728450/default/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
4. A similar issue also occurs with the following code:
```
dataset = load_dataset("sst", "default")
```
### Expected behavior
If the dataset is loaded correctly, one will have:
```
>>> print(dataset)
DatasetDict({
train: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 5452
})
test: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 500
})
})
>>>
```
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.1
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | 2024-04-03T13:42:29Z | 6,770 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-01T20:17:48Z | https://api.github.com/repos/huggingface/datasets/issues/6770/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6770/timeline | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | https://api.github.com/repos/huggingface/datasets/issues/6770/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/19348888?v=4",
"events_url": "https://api.github.com/users/fshp971/events{/privacy}",
"followers_url": "https://api.github.com/users/fshp971/followers",
"following_url": "https://api.github.com/users/fshp971/following{/other_user}",
"gists_url": "https://api.github.com/users/fshp971/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fshp971",
"id": 19348888,
"login": "fshp971",
"node_id": "MDQ6VXNlcjE5MzQ4ODg4",
"organizations_url": "https://api.github.com/users/fshp971/orgs",
"received_events_url": "https://api.github.com/users/fshp971/received_events",
"repos_url": "https://api.github.com/users/fshp971/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fshp971/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fshp971/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fshp971"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EQyEL | [
"You should be able to fix this by updating `huggingface_hub` with `pip install -U huggingface_hub`. We use this package under the hood to resolve the Hub's files."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6770/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6770 | https://github.com/huggingface/datasets/issues/6770 | false |
2,218,242,015 | https://api.github.com/repos/huggingface/datasets/issues/6769/labels{/name} | ### Feature request
Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of its columns contains custom (non-serializable) Python objects. For example, a minimal snippet:
```
class MyClass:
pass
dataset = datasets.Dataset.from_list([
dict(a=MyClass(), b='hello'),
])
```
It gives this error:
```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```
I guess it is because Dataset forces everything to be converted into the Arrow format. However, is there any way to make this scenario work? Thanks!
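A possible workaround sketch in the meantime (purely illustrative, not an official API): store the objects as pickled bytes, which Arrow can hold, and unpickle them on access:
```python
import pickle

import datasets


class MyClass:
    pass


# Bytes are a supported Arrow type, so pickling works around the
# type-inference error above.
dataset = datasets.Dataset.from_list([
    {"a": pickle.dumps(MyClass()), "b": "hello"},
])

restored = pickle.loads(dataset[0]["a"])  # back to a MyClass instance
```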
### Motivation
(see above)
### Your contribution
Yes, I am happy to PR!
Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy
EDIT: possibly related https://github.com/huggingface/datasets/issues/5766 | 2024-04-01T13:36:58Z | 6,769 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-04-01T13:18:47Z | https://api.github.com/repos/huggingface/datasets/issues/6769/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6769/timeline | (Willing to PR) Datasets with custom python objects | https://api.github.com/repos/huggingface/datasets/issues/6769/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fzyzcjy",
"id": 5236035,
"login": "fzyzcjy",
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fzyzcjy"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EN6_f | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6769/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6769 | https://github.com/huggingface/datasets/issues/6769 | false |
2,217,065,412 | https://api.github.com/repos/huggingface/datasets/issues/6767/labels{/name} | Fixed the issue #6755 on the typo mistake | 2024-04-02T14:14:02Z | 6,767 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-31T16:13:37Z | https://api.github.com/repos/huggingface/datasets/issues/6767/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6767/timeline | fixing the issue 6755(small typo) | https://api.github.com/repos/huggingface/datasets/issues/6767/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
} | [] | null | null | CONTRIBUTOR | 2024-04-02T14:01:18Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6767",
"merged_at": "2024-04-02T14:01:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6767"
} | PR_kwDODunzps5rQO9J | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6767). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005526 / 0.011353 (-0.005827) | 0.003839 / 0.011008 (-0.007169) | 0.064027 / 0.038508 (0.025519) | 0.032316 / 0.023109 (0.009206) | 0.250707 / 0.275898 (-0.025191) | 0.269222 / 0.323480 (-0.054258) | 0.004335 / 0.007986 (-0.003651) | 0.002703 / 0.004328 (-0.001626) | 0.049621 / 0.004250 (0.045370) | 0.047499 / 0.037052 (0.010446) | 0.262362 / 0.258489 (0.003873) | 0.292765 / 0.293841 (-0.001076) | 0.028661 / 0.128546 (-0.099885) | 0.010835 / 0.075646 (-0.064811) | 0.208910 / 0.419271 (-0.210362) | 0.036624 / 0.043533 (-0.006909) | 0.247448 / 0.255139 (-0.007691) | 0.270593 / 0.283200 (-0.012607) | 0.018988 / 0.141683 (-0.122695) | 1.141224 / 1.452155 (-0.310931) | 1.204944 / 1.492716 (-0.287772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096324 / 0.018006 (0.078318) | 0.292495 / 0.000490 (0.292006) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018379 / 0.037411 (-0.019032) | 0.065216 / 0.014526 (0.050690) | 0.074071 / 0.176557 (-0.102486) | 0.120793 / 0.737135 (-0.616343) | 0.075882 / 0.296338 (-0.220456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286354 / 0.215209 (0.071145) | 2.800766 / 2.077655 (0.723111) | 1.474126 / 1.504120 (-0.029994) | 1.358232 / 1.541195 (-0.182963) | 1.400639 / 
1.468490 (-0.067851) | 0.578354 / 4.584777 (-4.006423) | 2.454441 / 3.745712 (-1.291271) | 2.927003 / 5.269862 (-2.342859) | 1.826127 / 4.565676 (-2.739550) | 0.063049 / 0.424275 (-0.361226) | 0.005010 / 0.007607 (-0.002597) | 0.342174 / 0.226044 (0.116129) | 3.415900 / 2.268929 (1.146971) | 1.854096 / 55.444624 (-53.590528) | 1.568626 / 6.876477 (-5.307851) | 1.660138 / 2.142072 (-0.481934) | 0.664059 / 4.805227 (-4.141168) | 0.120496 / 6.500664 (-6.380168) | 0.044664 / 0.075469 (-0.030805) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988434 / 1.841788 (-0.853353) | 12.525563 / 8.074308 (4.451255) | 10.016862 / 10.191392 (-0.174530) | 0.134043 / 0.680424 (-0.546381) | 0.014349 / 0.534201 (-0.519852) | 0.287173 / 0.579283 (-0.292110) | 0.266499 / 0.434364 (-0.167865) | 0.325425 / 0.540337 (-0.214912) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005675 / 0.011353 (-0.005678) | 0.004238 / 0.011008 (-0.006770) | 0.051048 / 0.038508 (0.012540) | 0.033428 / 0.023109 (0.010319) | 0.283406 / 0.275898 (0.007508) | 0.309321 / 0.323480 (-0.014159) | 0.004354 / 0.007986 (-0.003631) | 0.003101 / 0.004328 (-0.001228) | 0.049369 / 0.004250 (0.045119) | 0.043252 / 0.037052 (0.006200) | 0.293097 / 0.258489 (0.034608) | 0.324392 / 0.293841 (0.030551) | 0.030524 / 0.128546 (-0.098022) | 0.010977 / 0.075646 (-0.064669) | 0.058546 / 0.419271 (-0.360726) | 0.033295 / 0.043533 (-0.010238) | 0.284929 / 0.255139 (0.029790) | 0.302925 / 0.283200 (0.019726) | 0.018586 / 0.141683 (-0.123097) | 1.156552 / 1.452155 (-0.295602) | 1.208856 / 1.492716 (-0.283860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096938 / 0.018006 (0.078932) | 0.305375 / 0.000490 (0.304886) | 0.000227 / 0.000200 (0.000027) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022658 / 0.037411 (-0.014754) | 0.078125 / 0.014526 (0.063599) | 0.087892 / 0.176557 (-0.088665) | 0.127745 / 0.737135 (-0.609390) | 0.089806 / 0.296338 (-0.206533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292434 / 0.215209 (0.077225) | 2.862329 / 2.077655 (0.784674) | 1.607948 / 1.504120 (0.103828) | 1.487179 / 1.541195 (-0.054016) | 1.542234 / 1.468490 (0.073744) | 0.579446 / 4.584777 (-4.005331) | 2.478549 / 3.745712 (-1.267163) | 2.923493 / 5.269862 (-2.346369) | 1.833161 / 4.565676 (-2.732515) | 0.064289 / 0.424275 (-0.359986) | 0.005638 / 0.007607 (-0.001969) | 0.350111 / 0.226044 (0.124067) | 3.436035 / 2.268929 (1.167107) | 1.970592 / 55.444624 (-53.474032) | 1.717474 / 6.876477 (-5.159002) | 1.753150 / 2.142072 (-0.388922) | 0.660495 / 4.805227 (-4.144732) | 0.119302 / 6.500664 (-6.381362) | 0.042633 / 0.075469 (-0.032836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.018761 / 1.841788 (-0.823027) | 12.859834 / 8.074308 (4.785525) | 10.547789 / 10.191392 (0.356397) | 0.131986 / 0.680424 (-0.548438) | 0.016469 / 0.534201 (-0.517732) | 0.288585 / 0.579283 (-0.290698) | 0.270499 / 0.434364 (-0.163865) | 0.325801 / 0.540337 (-0.214537) | 0.416551 / 1.386936 (-0.970385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7599f15537b094bfd18de5af7bb2a482c06d7a0e \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6767/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6767 | https://github.com/huggingface/datasets/pull/6767 | true |
2,215,933,515 | https://api.github.com/repos/huggingface/datasets/issues/6765/labels{/name} | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you have fsspec 2024.3.1 which is incompatible.
Successfully installed aiobotocore-2.12.1 aioitertools-0.11.0 botocore-1.34.51 fsspec-2024.3.1 jmespath-1.0.1 s3fs-2024.3.1 urllib3-2.0.7 wrapt-1.16.0
```
When I install with pip, pip allows this error to exist while still installing s3fs, but this error breaks poetry, since poetry will refuse to install s3fs because of the dependency conflict.
Maybe I'm missing something, so perhaps it's not a bug but a mistake on my end? Any input would be helpful. Thanks!
### Steps to reproduce the bug
1. conda create -n tmp python=3.10 -y
2. conda activate tmp
3. pip install datasets
4. pip install s3fs
### Expected behavior
I would expect there to be no error.
### Environment info
MacOS (ARM), Python3.10, conda 23.11.0. | 2024-04-03T14:33:12Z | 6,765 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-29T19:57:24Z | https://api.github.com/repos/huggingface/datasets/issues/6765/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6765/timeline | Compatibility issue between s3fs, fsspec, and datasets | https://api.github.com/repos/huggingface/datasets/issues/6765/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/njbrake",
"id": 33383515,
"login": "njbrake",
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"repos_url": "https://api.github.com/users/njbrake/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"type": "User",
"url": "https://api.github.com/users/njbrake"
} | [] | null | completed | NONE | 2024-04-03T14:33:12Z | null | I_kwDODunzps6EFHZL | [
"Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.",
"> Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.\r\n\r\nThanks so much! My inexperience with pip is showing 😆 🙈 "
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6765/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6765 | https://github.com/huggingface/datasets/issues/6765 | false |
2,215,767,119 | https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name} | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metadata.csv
while this dataset can't:
├── example_dataset_symlink/
│ ├── data/
│ │ ├── train/
│ │ │ ├── sym0 -> file0
│ │ │ ├── sym1 -> file1
│ │ ├── dev/
│ │ │ ├── sym2 -> file2
│ │ │ ├── sym3 -> file3
│ ├── metadata.csv
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two examples in train and two in dev. The script won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer copying symbolic links of the data. This way, the memory usage would not significantly increase beyond the initial dataset size.
Advantages of this approach:
- It would leave a smaller memory footprint on the hard drive
- Creating smaller datasets would be much faster
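Until something like this is supported natively, a workaround sketch (the file names and the `Audio` feature below are assumptions about the example dataset) is to resolve the links before building the dataset, which is essentially the `os.path.realpath` change suggested in the next section:
```python
import os

from datasets import Audio, Dataset

# Symlinked files from the example dataset (paths are illustrative).
files = [
    "example_dataset_symlink/data/train/sym0",
    "example_dataset_symlink/data/train/sym1",
]
resolved = [os.path.realpath(f) for f in files]

dataset = Dataset.from_dict({"audio": resolved}).cast_column("audio", Audio())
```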
### Your contribution
I would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input. | 2024-03-29T17:52:27Z | 6,764 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-29T17:49:28Z | https://api.github.com/repos/huggingface/datasets/issues/6764/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6764/timeline | load_dataset can't work with symbolic links | https://api.github.com/repos/huggingface/datasets/issues/6764/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirVincan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VladimirVincan",
"id": 13640533,
"login": "VladimirVincan",
"node_id": "MDQ6VXNlcjEzNjQwNTMz",
"organizations_url": "https://api.github.com/users/VladimirVincan/orgs",
"received_events_url": "https://api.github.com/users/VladimirVincan/received_events",
"repos_url": "https://api.github.com/users/VladimirVincan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VladimirVincan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirVincan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VladimirVincan"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EEexP | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6764 | https://github.com/huggingface/datasets/issues/6764 | false |
2,213,440,804 | https://api.github.com/repos/huggingface/datasets/issues/6763/labels{/name} | When a dataset with uppercase letters in its name is first loaded using `load_dataset()`, the local cache directory is created with an all-lowercase name.
However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This discrepancy can lead to confusion and, particularly in offline mode, results in errors.
### Reproduce
```bash
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
>>> quit()
~$ export HF_DATASETS_OFFLINE=1
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'locuslab/TOFU': Offline mode is enabled.
>>>
```
I fixed this issue by lower-casing the dataset name (`.lower()`) when generating the cache_dir. | 2024-03-28T15:51:46Z | 6,763 | null | https://api.github.com/repos/huggingface/datasets | false | []
"avatar_url": "https://avatars.githubusercontent.com/u/58537872?v=4",
"events_url": "https://api.github.com/users/Sumsky21/events{/privacy}",
"followers_url": "https://api.github.com/users/Sumsky21/followers",
"following_url": "https://api.github.com/users/Sumsky21/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumsky21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sumsky21",
"id": 58537872,
"login": "Sumsky21",
"node_id": "MDQ6VXNlcjU4NTM3ODcy",
"organizations_url": "https://api.github.com/users/Sumsky21/orgs",
"received_events_url": "https://api.github.com/users/Sumsky21/received_events",
"repos_url": "https://api.github.com/users/Sumsky21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sumsky21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumsky21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sumsky21"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6763",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6763"
} | PR_kwDODunzps5rENat | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6763/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6763 | https://github.com/huggingface/datasets/pull/6763 | true |
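A rough sketch of the normalization this pull request describes, not the actual patch: derive the cache directory from a lower-cased copy of the dataset name so that `locuslab/TOFU` and `locuslab/tofu` resolve to the same local path. The `___` separator and base path mimic the existing cache layout but are assumptions here.

```python
import os

def cache_dir_for(dataset_name: str, base: str = "~/.cache/huggingface/datasets") -> str:
    # Lower-case the repo id before building the path, as the PR suggests.
    normalized = dataset_name.lower().replace("/", "___")
    return os.path.join(os.path.expanduser(base), normalized)

print(cache_dir_for("locuslab/TOFU"))  # .../locuslab___tofu
print(cache_dir_for("locuslab/tofu"))  # same directory, no case mismatch
```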
2,213,275,468 | https://api.github.com/repos/huggingface/datasets/issues/6762/labels{/name} | I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable. | 2024-03-29T15:44:02Z | 6,762 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T13:40:28Z | https://api.github.com/repos/huggingface/datasets/issues/6762/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6762/timeline | Allow polars as valid output type | https://api.github.com/repos/huggingface/datasets/issues/6762/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6762.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6762",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6762.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6762"
} | PR_kwDODunzps5rDpBe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6762/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6762 | https://github.com/huggingface/datasets/pull/6762 | true |
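A small illustration of what this pull request aims to allow once merged: a batched map function that returns a `polars` DataFrame instead of a dict. The toy dataset and column name are made up for the example.

```python
import polars as pl
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

def upper_case(batch):
    # With the change proposed above, a polars DataFrame becomes a valid return type.
    return pl.DataFrame({"text": [t.upper() for t in batch["text"]]})

ds = ds.map(upper_case, batched=True)
print(ds[0])
```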
2,212,805,108 | https://api.github.com/repos/huggingface/datasets/issues/6761/labels{/name} | What does this PR do?
1. Remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since the `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part.
2. `preupload_lfs_files` also had a different behavior between `<0.20` and `>=0.20`. I removed it since huggingface_hub is now pinned to `>=0.21.2`.
3. `hf_hub_url` is overwritten to default to the dataset repo_type. I do think it is misleading to keep the same method naming for it. I renamed it to `get_dataset_url` for clarity. Let me know if you prefer to see this change reverted. | 2024-03-29T13:27:26Z | 6,761 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T09:57:57Z | https://api.github.com/repos/huggingface/datasets/issues/6761/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6761/timeline | Remove deprecated code | https://api.github.com/repos/huggingface/datasets/issues/6761/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [] | null | null | CONTRIBUTOR | 2024-03-29T13:18:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6761",
"merged_at": "2024-03-29T13:18:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6761"
} | PR_kwDODunzps5rCAu8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6761). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for cleaning this :) I'm also fine with renaming `hf_dataset_url` (and not `get_dataset_url` as you said in your OP)",
"(Yep, `hf_dataset_url` is fine, made a mistake writing the PR description)",
"@albertvillanova Sorry about that, tests are now fixed! :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005995) | 0.003788 / 0.011008 (-0.007220) | 0.063630 / 0.038508 (0.025122) | 0.031353 / 0.023109 (0.008244) | 0.247525 / 0.275898 (-0.028373) | 0.282052 / 0.323480 (-0.041428) | 0.004247 / 0.007986 (-0.003739) | 0.002750 / 0.004328 (-0.001579) | 0.049467 / 0.004250 (0.045217) | 0.046663 / 0.037052 (0.009610) | 0.266440 / 0.258489 (0.007951) | 0.295230 / 0.293841 (0.001389) | 0.028271 / 0.128546 (-0.100276) | 0.011116 / 0.075646 (-0.064530) | 0.222092 / 0.419271 (-0.197179) | 0.036627 / 0.043533 (-0.006906) | 0.252607 / 0.255139 (-0.002532) | 0.271231 / 0.283200 (-0.011969) | 0.019070 / 0.141683 (-0.122613) | 1.152645 / 1.452155 (-0.299509) | 1.211267 / 1.492716 (-0.281449) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095002 / 0.018006 (0.076996) | 0.304054 / 0.000490 (0.303564) | 0.000212 / 0.000200 (0.000012) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018251 / 0.037411 (-0.019161) | 0.061929 / 0.014526 (0.047403) | 0.074641 / 0.176557 (-0.101916) | 0.122643 / 0.737135 (-0.614492) | 0.076744 / 0.296338 (-0.219594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284605 / 0.215209 (0.069396) | 2.774638 / 2.077655 (0.696984) | 1.473907 / 1.504120 (-0.030213) | 1.351054 / 1.541195 (-0.190141) | 1.348840 / 
1.468490 (-0.119650) | 0.576243 / 4.584777 (-4.008534) | 2.444110 / 3.745712 (-1.301602) | 2.814741 / 5.269862 (-2.455121) | 1.762666 / 4.565676 (-2.803010) | 0.063959 / 0.424275 (-0.360316) | 0.005011 / 0.007607 (-0.002596) | 0.338406 / 0.226044 (0.112361) | 3.361213 / 2.268929 (1.092284) | 1.832674 / 55.444624 (-53.611950) | 1.564229 / 6.876477 (-5.312248) | 1.570843 / 2.142072 (-0.571230) | 0.657134 / 4.805227 (-4.148093) | 0.120041 / 6.500664 (-6.380623) | 0.048594 / 0.075469 (-0.026875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965328 / 1.841788 (-0.876460) | 11.704441 / 8.074308 (3.630133) | 9.895462 / 10.191392 (-0.295930) | 0.131913 / 0.680424 (-0.548511) | 0.015175 / 0.534201 (-0.519026) | 0.292022 / 0.579283 (-0.287261) | 0.269752 / 0.434364 (-0.164612) | 0.330453 / 0.540337 (-0.209884) | 0.421659 / 1.386936 (-0.965277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003809 / 0.011008 (-0.007199) | 0.049594 / 0.038508 (0.011086) | 0.031858 / 0.023109 (0.008748) | 0.277622 / 0.275898 (0.001724) | 0.296092 / 0.323480 (-0.027388) | 0.004209 / 0.007986 (-0.003777) | 0.002726 / 0.004328 (-0.001603) | 0.048057 / 0.004250 (0.043806) | 0.043317 / 0.037052 (0.006265) | 0.288371 / 0.258489 (0.029882) | 0.312847 / 0.293841 (0.019007) | 0.029110 / 0.128546 (-0.099437) | 0.010792 / 0.075646 (-0.064854) | 0.058694 / 0.419271 (-0.360577) | 0.033315 / 0.043533 (-0.010218) | 0.281225 / 0.255139 (0.026086) | 0.297044 / 0.283200 (0.013844) | 0.018897 / 0.141683 (-0.122786) | 1.156417 / 1.452155 (-0.295738) | 1.221393 / 1.492716 (-0.271323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.304107 / 0.000490 (0.303618) | 0.000213 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021658 / 0.037411 (-0.015753) | 0.075948 / 0.014526 (0.061423) | 0.087019 / 0.176557 (-0.089537) | 0.127309 / 0.737135 (-0.609827) | 0.092251 / 0.296338 (-0.204087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291906 / 0.215209 (0.076697) | 2.865007 / 2.077655 (0.787352) | 1.591647 / 1.504120 (0.087527) | 1.474499 / 1.541195 (-0.066696) | 1.496644 / 1.468490 (0.028154) | 0.575337 / 4.584777 (-4.009440) | 2.569426 / 3.745712 (-1.176287) | 2.872611 / 5.269862 (-2.397251) | 1.804278 / 4.565676 (-2.761399) | 0.064225 / 0.424275 (-0.360050) | 0.005574 / 0.007607 (-0.002033) | 0.347724 / 0.226044 (0.121680) | 3.426418 / 2.268929 (1.157490) | 1.966270 / 55.444624 (-53.478355) | 1.687790 / 6.876477 (-5.188686) | 1.728530 / 2.142072 (-0.413542) | 0.650251 / 4.805227 (-4.154977) | 0.118381 / 6.500664 (-6.382283) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014203 / 1.841788 (-0.827585) | 12.219496 / 8.074308 (4.145188) | 10.469677 / 10.191392 (0.278285) | 0.141840 / 0.680424 (-0.538584) | 0.015104 / 0.534201 (-0.519097) | 0.288453 / 0.579283 (-0.290830) | 0.287467 / 0.434364 (-0.146897) | 0.331046 / 0.540337 (-0.209292) | 0.423731 / 1.386936 (-0.963205) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#66d6242626eada79cfba4df39d99cd2bacb1cbea \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6761/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6761 | https://github.com/huggingface/datasets/pull/6761 | true |
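A minimal sketch of the `huggingface_hub` call that replaces the deprecated `list_files_info` discussed in the pull request description above; the repository name is an arbitrary example.

```python
from huggingface_hub import HfApi

api = HfApi()
# list_repo_tree is the non-deprecated way to enumerate files in a dataset repo.
for entry in api.list_repo_tree("rajpurkar/squad", repo_type="dataset"):
    print(entry.path)
```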
2,212,288,122 | https://api.github.com/repos/huggingface/datasets/issues/6760/labels{/name} | ### Describe the bug
This happens with datasets-2.18.0; I downgraded the version to 2.14.6, which fixes this temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
### Steps to reproduce the bug
1. Using Python3.10/3.11
2. Install datasets-2.18.0
3. test with
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should manage to download and load the dataset without such an error.
### Environment info
Ubuntu, Python3.10/3.11 | 2024-04-07T09:40:40Z | 6,760 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-28T03:44:26Z | https://api.github.com/repos/huggingface/datasets/issues/6760/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6760/timeline | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | https://api.github.com/repos/huggingface/datasets/issues/6760/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17897916?v=4",
"events_url": "https://api.github.com/users/yucc-leon/events{/privacy}",
"followers_url": "https://api.github.com/users/yucc-leon/followers",
"following_url": "https://api.github.com/users/yucc-leon/following{/other_user}",
"gists_url": "https://api.github.com/users/yucc-leon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yucc-leon",
"id": 17897916,
"login": "yucc-leon",
"node_id": "MDQ6VXNlcjE3ODk3OTE2",
"organizations_url": "https://api.github.com/users/yucc-leon/orgs",
"received_events_url": "https://api.github.com/users/yucc-leon/received_events",
"repos_url": "https://api.github.com/users/yucc-leon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yucc-leon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yucc-leon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yucc-leon"
} | [] | null | null | NONE | null | null | I_kwDODunzps6D3NZ6 | [
"The same error with mteb datasets.",
"Unfortunately, I'm unable to reproduce this error locally or on Colab.",
"Here is the requirements.txt from a clean virtual environment (managed by conda) where I only install `datasets` by \r\n`pip install datasets`. \r\nThe pip list:\r\n```\r\naiohttp==3.9.3\r\naiosignal==1.3.1\r\nattrs==23.2.0\r\ncertifi==2024.2.2\r\ncharset-normalizer==3.3.2\r\ndatasets==2.18.0\r\ndill==0.3.8\r\nfilelock==3.13.3\r\nfrozenlist==1.4.1\r\nfsspec==2024.2.0\r\nhuggingface-hub==0.22.2\r\nidna==3.6\r\nmultidict==6.0.5\r\nmultiprocess==0.70.16\r\nnumpy==1.26.4\r\npackaging==24.0\r\npandas==2.2.1\r\npyarrow==15.0.2\r\npyarrow-hotfix==0.6\r\npython-dateutil==2.9.0.post0\r\npytz==2024.1\r\nPyYAML==6.0.1\r\nrequests==2.31.0\r\nsix==1.16.0\r\ntqdm==4.66.2\r\ntyping_extensions==4.11.0\r\ntzdata==2024.1\r\nurllib3==2.2.1\r\nxxhash==3.4.1\r\nyarl==1.9.4\r\n```\r\nAnd the error can be reproduced.\r\n\r\nDowngrading to datasets==2.14.6 changes some packages' versions:\r\n\r\n```\r\nSuccessfully installed datasets-2.14.6 dill-0.3.7 fsspec-2023.10.0 multiprocess-0.70.15\r\n```\r\nand the dataset can be downloaded and loaded. \r\n\r\nThen I upgrade the version to 2.18.0 again; now the dataset can be loaded with such a line:\r\n```Using the latest cached version of the module from /home/xxx/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--apps/04ac807715d07d6e5cc580f59cdc8213cd7dc4529d0bb819cca72c9f8e8c1aa5 (last modified on Sun Apr 7 09:06:43 2024) since it couldn't be found locally at codeparrot/apps, or remotely on the Hugging Face Hub. ```\r\n\r\nSo the latest version works wrong when requesting the dataset info. \r\n\r\n**But if you cannot reproduce this, I may ignore some detailed information: I use `HF_ENDPOINT=https://hf-mirror.com` for some reason (if not use this I cannot connect to huggingface resources) and the error occurs when requesting the dataset's info card.** \r\nMaybe the error is caused by this environment variable.\r\nI'll open an issue in the author's repo now."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6760/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6760 | https://github.com/huggingface/datasets/issues/6760 | false |
2,208,892,891 | https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name} | ### Feature request
Running .map and .filter functions with `num_proc` consecutively instantiates a new multiprocessing pool for each call.
As instantiating a Pool is very resource intensive, it can be a bottleneck when filtering iteratively.
My ideas:
1. There should be an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside would be that it would be complex to determine the correct resource allocation and deallocation of the pool, i.e. the dataset can outlive the utility of the pool.
2. Provide a pool as an argument. The downside would be the expertise required by the user. The upside is that there is better resource management.
### Motivation
It is really slow to iteratively perform map and filter operations on a dataset.
### Your contribution
If approved I could integrate it. I would need to know what method would be most suitable to implement from the two options above. | 2024-03-26T17:35:25Z | 6,759 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-26T17:35:25Z | https://api.github.com/repos/huggingface/datasets/issues/6759/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6759/timeline | Persistent multi-process Pool | https://api.github.com/repos/huggingface/datasets/issues/6759/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DqQfb | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6759 | https://github.com/huggingface/datasets/issues/6759 | false |
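This is not a `datasets` API, just a generic illustration of the "persistent pool" idea the request describes, using the `multiprocess` library that `datasets` builds on: the pool is created once and reused for several operations instead of being rebuilt per call.

```python
from multiprocess import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:  # created once...
        first = pool.map(square, range(10))        # ...then reused
        second = pool.map(square, range(10, 20))   # no repeated pool start-up cost
    print(first, second)
```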
2,208,494,302 | https://api.github.com/repos/huggingface/datasets/issues/6758/labels{/name} | ### Describe the bug
I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs or take them whole. Passing `sample_by="document"` to `load_dataset` results in files getting split into lines regardless. I have edited `src/datasets/packaged_modules/text/text.py` for myself to switch the default and it works fine.
As a side note, the `if-else` for `sample_by` will silently load an empty dataset if someone makes a typo in the argument, which is not ideal.
### Steps to reproduce the bug
1. Prepare data as a bunch of files in a directory.
2. Load that data via `load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")`.
3. Inspect the resultant dataset — every item should have the form of `{"text": <a line from a file>}`.
### Expected behavior
`load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")` should result in a dataset with items of the form `{"text": <one document>}`.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1046-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 2024-04-08T13:42:35Z | 6,758 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-26T14:55:33Z | https://api.github.com/repos/huggingface/datasets/issues/6758/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/6758/timeline | Passing `sample_by` to `load_dataset` when loading text data does not work | https://api.github.com/repos/huggingface/datasets/issues/6758/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/823693?v=4",
"events_url": "https://api.github.com/users/ntoxeg/events{/privacy}",
"followers_url": "https://api.github.com/users/ntoxeg/followers",
"following_url": "https://api.github.com/users/ntoxeg/following{/other_user}",
"gists_url": "https://api.github.com/users/ntoxeg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ntoxeg",
"id": 823693,
"login": "ntoxeg",
"node_id": "MDQ6VXNlcjgyMzY5Mw==",
"organizations_url": "https://api.github.com/users/ntoxeg/orgs",
"received_events_url": "https://api.github.com/users/ntoxeg/received_events",
"repos_url": "https://api.github.com/users/ntoxeg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ntoxeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntoxeg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ntoxeg"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | null | NONE | null | null | I_kwDODunzps6DovLe | [
"Thanks for reporting! We are working on a fix."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6758/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6758 | https://github.com/huggingface/datasets/issues/6758 | false |
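A hedged workaround for the behaviour reported above, until `sample_by="document"` is honoured: read each file yourself and build the dataset directly, so that one file maps to one example. The `docs/*.txt` glob is a placeholder.

```python
from glob import glob

from datasets import Dataset

paths = sorted(glob("docs/*.txt"))
texts = [open(p, encoding="utf-8").read() for p in paths]

# One whole document per row, regardless of how the text builder splits files.
ds = Dataset.from_dict({"text": texts})
print(ds)
```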
2,206,280,340 | https://api.github.com/repos/huggingface/datasets/issues/6757/labels{/name} | Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/transformers-doc-builder"` by default), we don't run the CI inside a container, therefore saving ~2min of download time. The plan is to test disabling the transformers container on a few "big" repo and if everything works correctly, we will stop making it the default container. More details on https://github.com/huggingface/doc-builder/pull/487.
cc @mishig25 | 2024-03-27T16:26:35Z | 6,757 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-03-25T17:16:11Z | https://api.github.com/repos/huggingface/datasets/issues/6757/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6757/timeline | Test disabling transformers containers in docs CI | https://api.github.com/repos/huggingface/datasets/issues/6757/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6757.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6757",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6757.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6757"
} | PR_kwDODunzps5qr7Li | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6757). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"On slack it was mentioned that it was actually slower for `datasets`, should we close this one or am I missing something ?",
"@lhoestq I converted to draft. Want to make some more tests and will let you know"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6757/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6757 | https://github.com/huggingface/datasets/pull/6757 | true |
2,205,557,725 | https://api.github.com/repos/huggingface/datasets/issues/6756/labels{/name} | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`.
See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite
Note: should we also support DuckDB files? | 2024-03-26T16:09:32Z | 6,756 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-25T11:48:05Z | https://api.github.com/repos/huggingface/datasets/issues/6756/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6756/timeline | Support SQLite files? | https://api.github.com/repos/huggingface/datasets/issues/6756/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | completed | CONTRIBUTOR | 2024-03-26T16:09:32Z | null | I_kwDODunzps6DdiPd | [
"You can use `Dataset.from_sql(path_to_sql_file)` already. Though we haven't added the Sql dataset builder to the `_PACKAGED_DATASETS_MODULES` list or in `_EXTENSION_TO_MODULE` to map `.sqlite` to the Sql dataset builder\r\n\r\nThis would allow to load a dataset repository with a `.sqlite` file using `load_dataset` and enable the Dataset Viewer",
"Considering `Dataset.from_sql`'s (extremely) low usage, I don't think many users are interested in using this format for their datasets. Also, SQLite files are hard/impossible to stream efficiently and require custom logic to define splits/subsets, so IMO we shouldn't encourage people to use SQLite on the Hub.\r\n\r\n@severo Do you have some real-world examples of datasets published in this format?",
"No. Indeed, it seems better to explicitly not support sqlite"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6756/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6756 | https://github.com/huggingface/datasets/issues/6756 | false |
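A minimal sketch of the existing escape hatch mentioned in the comments above: `Dataset.from_sql` can already read a table from a local SQLite file through a SQLAlchemy URI (this requires `sqlalchemy` to be installed). The file and table names follow the linked `test_iris_sqlite` example and are assumptions here.

```python
from datasets import Dataset

# "Iris" is the table name inside the SQLite file; "Iris.sqlite" is a local copy of it.
ds = Dataset.from_sql("Iris", "sqlite:///Iris.sqlite")
print(ds)
```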
2,204,573,289 | https://api.github.com/repos/huggingface/datasets/issues/6755/labels{/name} | ### Describe the bug
There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
It should be `caching is enabled`.
### Steps to reproduce the bug
Please visit
https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
### Expected behavior
`caching is enabled`
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | 2024-04-02T14:01:19Z | 6,755 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | 2024-03-24T21:47:52Z | https://api.github.com/repos/huggingface/datasets/issues/6755/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
} | https://api.github.com/repos/huggingface/datasets/issues/6755/timeline | Small typo on the documentation | https://api.github.com/repos/huggingface/datasets/issues/6755/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
}
] | null | completed | NONE | 2024-04-02T14:01:19Z | null | I_kwDODunzps6DZx5p | [
"Thanks for reporting @fostiropoulos! I've edited your comment to fix the link to the problematic line.\r\n",
"@mariosasko can i take this up?",
"#self-assign"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6755/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6755 | https://github.com/huggingface/datasets/issues/6755 | false |
2,204,214,595 | https://api.github.com/repos/huggingface/datasets/issues/6754/labels{/name} | Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729
I didn't find a guideline on how to run the tests, so I just ran the following steps to make sure that this bug is fixed.
1. `python test.py`,
2. then `HF_DATASETS_OFFLINE=1 python test.py`
The `test.py` is
```
import datasets
datasets.utils.logging.set_verbosity_info()
ds = datasets.load_dataset('izhx/STS17-debug')
print(ds)
ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')
print(ds)
```
| 2024-04-09T01:19:56Z | 6,754 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-24T06:59:15Z | https://api.github.com/repos/huggingface/datasets/issues/6754/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6754/timeline | Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache` | https://api.github.com/repos/huggingface/datasets/issues/6754/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26690193?v=4",
"events_url": "https://api.github.com/users/izhx/events{/privacy}",
"followers_url": "https://api.github.com/users/izhx/followers",
"following_url": "https://api.github.com/users/izhx/following{/other_user}",
"gists_url": "https://api.github.com/users/izhx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/izhx",
"id": 26690193,
"login": "izhx",
"node_id": "MDQ6VXNlcjI2NjkwMTkz",
"organizations_url": "https://api.github.com/users/izhx/orgs",
"received_events_url": "https://api.github.com/users/izhx/received_events",
"repos_url": "https://api.github.com/users/izhx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/izhx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izhx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/izhx"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6754",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6754"
} | PR_kwDODunzps5qk-nr | [
"@lhoestq hi 😃, is there something else I need to do to check this change?",
"I added two tests and passed them on my server.\r\n\r\n```\r\npytest tests/packaged_modules/test_cache.py \r\n========================================================================== test session starts ==========================================================================\r\nplatform linux -- Python 3.11.5, pytest-8.1.1, pluggy-1.4.0\r\nrootdir: /mnt/nas/datasets\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.5.0, datadir-1.5.0\r\ncollected 8 items \r\n\r\ntests/packaged_modules/test_cache.py ........ [100%]\r\n\r\n========================================================================== 8 passed in 50.71s ===========================================================================\r\n\r\n```\r\n\r\n```\r\npytest tests/test_load.py\r\n========================================================================== test session starts ==========================================================================\r\nplatform linux -- Python 3.11.5, pytest-8.1.1, pluggy-1.4.0\r\nrootdir: /mnt/nas/datasets\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.5.0, datadir-1.5.0\r\ncollected 151 items \r\n\r\ntests/test_load.py .............................................................................................................................................. [ 94%]\r\n......... [100%]\r\n\r\n...\r\n\r\n============================================================= 151 passed, 29 warnings in 578.36s (0:09:38) ==============================================================\r\n```\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6754). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @izhx! I have also faced this issue, happy to see it already addressed, looking forward for PR merge :)"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6754/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6754 | https://github.com/huggingface/datasets/pull/6754 | true |
2,204,155,091 | https://api.github.com/repos/huggingface/datasets/issues/6753/labels{/name} | ### Describe the bug
When trying to run
```
import datasets
print(datasets.__version__)
```
It generates the following error
```
TypeError: expected string or bytes-like object
```
It looks like it cannot find a valid version of `fsspec`,
though the fsspec version is fine when I checked via the command
```
import fsspec
print(fsspec.__version__)
# output: 2024.3.1
```
Detailed crash report
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import datasets
2 print(datasets.__version__)
File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18
1 # ruff: noqa
2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
3 #
(...)
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
16 __version__ = "2.18.0"
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:66
63 from multiprocess import Pool
64 from tqdm.contrib.concurrent import thread_map
---> 66 from . import config
67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/conda/lib/python3.10/site-packages/datasets/config.py:41
39 # Imports
40 DILL_VERSION = version.parse(importlib.metadata.version("dill"))
---> 41 FSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec"))
42 PANDAS_VERSION = version.parse(importlib.metadata.version("pandas"))
43 PYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow"))
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:49, in parse(version)
43 """
44 Parse the given version string and return either a :class:`Version` object
45 or a :class:`LegacyVersion` object depending on if the given version is
46 a valid PEP 440 version or a legacy version.
47 """
48 try:
---> 49 return Version(version)
50 except InvalidVersion:
51 return LegacyVersion(version)
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:264, in Version.__init__(self, version)
261 def __init__(self, version: str) -> None:
262
263 # Validate the version and parse it into pieces
--> 264 match = self._regex.search(version)
265 if not match:
266 raise InvalidVersion(f"Invalid version: '{version}'")
TypeError: expected string or bytes-like object
```
### Steps to reproduce the bug
1. run `!pip install -U datasets` on kaggle
2. check datasets is installed via
```
import datasets
print(datasets.__version__)
```
### Expected behavior
Expected to print datasets version, like `2.18.0`
### Environment info
Running on Kaggle, latest enviornment , here is the notebook https://www.kaggle.com/code/jtv199/mistrial-7b-part2 | 2024-04-04T13:50:35Z | 6,753 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-24T03:01:30Z | https://api.github.com/repos/huggingface/datasets/issues/6753/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6753/timeline | Type error when importing datasets on Kaggle | https://api.github.com/repos/huggingface/datasets/issues/6753/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/18300717?v=4",
"events_url": "https://api.github.com/users/jtv199/events{/privacy}",
"followers_url": "https://api.github.com/users/jtv199/followers",
"following_url": "https://api.github.com/users/jtv199/following{/other_user}",
"gists_url": "https://api.github.com/users/jtv199/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jtv199",
"id": 18300717,
"login": "jtv199",
"node_id": "MDQ6VXNlcjE4MzAwNzE3",
"organizations_url": "https://api.github.com/users/jtv199/orgs",
"received_events_url": "https://api.github.com/users/jtv199/received_events",
"repos_url": "https://api.github.com/users/jtv199/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jtv199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtv199/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jtv199"
} | [] | null | completed | NONE | 2024-03-30T00:23:49Z | null | I_kwDODunzps6DYLzT | [
"I have the same problem \r\nIt seems that it only appears when you are using GPU \r\nIt seems to work fine with the 2.17 version though",
"Same here.",
"> I have the same problem\r\n> It seems that it only appears when you are using GPU\r\n> It seems to work fine with the 2.17 version though\r\n\r\nI downgraded from 2.18 to 2.17, and it works with CPU/GPU .. except now pyarrow complains\r\n\r\n```\r\n...\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:830, in pyarrow.lib._PandasConvertible.to_pandas()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/table.pxi:3989, in pyarrow.lib.Table._to_pandas()\r\n\r\nImportError: cannot import name table_to_blockmanager\r\n```\r\n\r\nsee also https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474#2722594",
"Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu aswell",
"I think you should remain open this issue. It works at the previous version but not the latter versions. It is possible as a bug that the maintainer could take note for.",
"> Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu as well\r\n\r\nVerified it's working w/ GPU if I make these 3 updates.\r\n\r\n```\r\ndatasets==2.16.0\r\nfsspec==2023.10.0\r\ngcsfs==2023.10.0\r\n```\r\n\r\nbut the issue shouldn't be closed, this is just a workaround until they get the issue with 2.18.0 resolved.\r\n\r\nSee also: https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474",
"> > Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu as well\r\n> \r\n> Verified it's working w/ GPU if I make these 3 updates.\r\n> \r\n> ```\r\n> datasets==2.16.0\r\n> fsspec==2023.10.0\r\n> gcsfs==2023.10.0\r\n> ```\r\n> \r\n> but the issue shouldn't be closed, this is just a workaround until they get the issue with 2.18.0 resolved.\r\n> \r\n> See also: https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474\r\n\r\nThis also works for me, thanks"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6753/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6753 | https://github.com/huggingface/datasets/issues/6753 | false |
2,204,043,839 | https://api.github.com/repos/huggingface/datasets/issues/6752/labels{/name} | ### Describe the bug
I'm loading a HuggingFace Dataset for images.
I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance, the type turns out to be float32.
### Steps to reproduce the bug
```python
import torch
import torchvision.transforms.v2 as transforms
from datasets import load_dataset
dataset = load_dataset('cifar10', split='test')
dataset = dataset.with_format("torch")
data_transform = transforms.Compose([transforms.Resize((32, 32)),
transforms.ToDtype(torch.float16, scale=True),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
def _preprocess(examples):
# Permutes from (BS x H x W x C) to (BS x C x H x W)
images = torch.permute(examples['img'], (0, 3, 2, 1))
examples['img'] = data_transform(images)
return examples
dataset = dataset.map(_preprocess, batched=True, batch_size=8)
```
Now at this point the dataset.features are showing float16 which is great because that's what I want.
```python
print(data_loader.features['img'])
Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None)
```
But when I try to sample an image from this dataloader, I'm getting a float32 image when I'm expecting float16:
```python
print(next(iter(data_loader))['img'].dtype)
torch.float32
```
### Expected behavior
I'm expecting the images loaded after the transformation to stay in float16.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.9
- `huggingface_hub` version: 0.21.4
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | 2024-03-23T20:53:56Z | 6,752 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-23T20:53:56Z | https://api.github.com/repos/huggingface/datasets/issues/6752/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6752/timeline | Precision being changed from float16 to float32 unexpectedly | https://api.github.com/repos/huggingface/datasets/issues/6752/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gcervantes8",
"id": 21228908,
"login": "gcervantes8",
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gcervantes8"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DXwo_ | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6752/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6752 | https://github.com/huggingface/datasets/issues/6752 | false |
2,203,951,501 | https://api.github.com/repos/huggingface/datasets/issues/6751/labels{/name} | Some functions in `streaming_download_manager.py` are not closing the file they open, which leads to `Unclosed file` warnings in our code. This fixes a few of them. | 2024-03-26T00:40:57Z | 6,751 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-23T16:32:08Z | https://api.github.com/repos/huggingface/datasets/issues/6751/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6751/timeline | Use 'with' operator for some download functions | https://api.github.com/repos/huggingface/datasets/issues/6751/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31669?v=4",
"events_url": "https://api.github.com/users/Moisan/events{/privacy}",
"followers_url": "https://api.github.com/users/Moisan/followers",
"following_url": "https://api.github.com/users/Moisan/following{/other_user}",
"gists_url": "https://api.github.com/users/Moisan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moisan",
"id": 31669,
"login": "Moisan",
"node_id": "MDQ6VXNlcjMxNjY5",
"organizations_url": "https://api.github.com/users/Moisan/orgs",
"received_events_url": "https://api.github.com/users/Moisan/received_events",
"repos_url": "https://api.github.com/users/Moisan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moisan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moisan"
} | [] | null | null | NONE | 2024-03-26T00:40:57Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6751",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6751"
} | PR_kwDODunzps5qkKLH | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6751). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I was mistaken on the intent of those functions, closing the PR."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6751/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6751 | https://github.com/huggingface/datasets/pull/6751 | true |
2,203,590,658 | https://api.github.com/repos/huggingface/datasets/issues/6750/labels{/name} | ### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not found. Setting CardData to empty.
*hangs bc i'm firewalled*
```
stack trace from ctrl-c:
```
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
output_path = get_from_cache(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache
response = http_head(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head
response = _request_with_retry(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
```
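For reference, a rough sketch of the workaround I would expect to help here, assuming the dataset is already fully cached locally (the variable must be set before `datasets` is imported, and it may still raise a `ConnectionError` for datasets that were originally resolved from the Hub):

```python
import os
# Tell `datasets` to skip Hub lookups and rely on the local cache only.
# This must be set before `datasets` is imported.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

dataset = datasets.load_dataset("hh-rlhf")  # same local path as above
```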
### Expected behavior
loads the dataset
### Environment info
```
> pip show datasets
Name: datasets
Version: 2.18.0
```
Python 3.10.2 | 2024-04-03T06:50:42Z | 6,750 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-23T01:06:32Z | https://api.github.com/repos/huggingface/datasets/issues/6750/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6750/timeline | `load_dataset` requires a network connection for local download? | https://api.github.com/repos/huggingface/datasets/issues/6750/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/6306695?v=4",
"events_url": "https://api.github.com/users/MiroFurtado/events{/privacy}",
"followers_url": "https://api.github.com/users/MiroFurtado/followers",
"following_url": "https://api.github.com/users/MiroFurtado/following{/other_user}",
"gists_url": "https://api.github.com/users/MiroFurtado/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MiroFurtado",
"id": 6306695,
"login": "MiroFurtado",
"node_id": "MDQ6VXNlcjYzMDY2OTU=",
"organizations_url": "https://api.github.com/users/MiroFurtado/orgs",
"received_events_url": "https://api.github.com/users/MiroFurtado/received_events",
"repos_url": "https://api.github.com/users/MiroFurtado/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MiroFurtado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiroFurtado/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MiroFurtado"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DWCAC | [
"Are you using `HF_DATASETS_OFFLINE=1` ?",
"> Are you using `HF_DATASETS_OFFLINE=1` ?\r\n\r\nThis doesn't work for me. `datasets=2.18.0`\r\n\r\n`test.py`:\r\n```\r\nimport datasets\r\n\r\ndatasets.utils.logging.set_verbosity_info()\r\n\r\nds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')\r\n\r\nprint(ds)\r\n```\r\n\r\nrun `python test.py`\r\n```\r\nGenerating dataset afqmc (/home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4)\r\nDownloading and preparing dataset afqmc/default to /home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4...\r\nDataset not on Hf google storage. Downloading and preparing it from source\r\nhf://datasets/C-MTEB/AFQMC@b44c3b011063adb25877c13823db83bb193913c4/data/validation-00000-of-00001-b8fc393b5ddedac7.parquet not found in cache or force_download set to True, downloading to /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b.incomplete\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240k/240k [00:01<00:00, 178kB/s]\r\nstoring hf://datasets/C-MTEB/AFQMC@b44c3b011063adb25877c13823db83bb193913c4/data/validation-00000-of-00001-b8fc393b5ddedac7.parquet in cache at /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b\r\ncreating metadata file for /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b\r\nDownloading took 0.0 min\r\nChecksum Computation took 0.0 min\r\nGenerating test split\r\nGenerating test split: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3861/3861 [00:00<00:00, 3972.00 examples/s]\r\nGenerating train split\r\nGenerating train split: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 34334/34334 [00:00<00:00, 34355.50 examples/s]\r\nGenerating validation split\r\nGenerating validation split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4316/4316 [00:00<00:00, 4477.00 examples/s]\r\nAll the splits matched successfully.\r\nDataset afqmc downloaded and prepared to /home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4. 
Subsequent calls will reuse this data.\r\nDatasetDict({\r\n test: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 3861\r\n })\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 34334\r\n })\r\n validation: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 4316\r\n })\r\n})\r\n```\r\n\r\nThen run `HF_DATASETS_OFFLINE=1 python test.py`\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 9, in <module>\r\n ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 2556, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 2228, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 1871, in dataset_module_factory\r\n raise ConnectionError(f\"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}\") from None\r\nConnectionError: Couldn't reach the Hugging Face Hub for dataset 'C-MTEB/AFQMC': Offline mode is enabled.\r\n```\r\n\r\n",
"I was having similar inexplicable issues.\r\n\r\nDoing this I *think* helped, but, `datasets` still *clearly* does not want to respect the cache:\r\n\r\n```python\r\npip install --upgrade datasets # now it is 2.18.0\r\nHF_DATASETS_OFFLINE=\"1\" python blah.py\r\n```\r\n\r\nOr similarly, I must spacify that env var to resuse the cache, IE, no arg to `load_dataset` helps it reuse the cache:\r\n\r\n```python\r\n\r\nimport os\r\nos.environ[\"HF_DATASETS_OFFLINE\"] = \"1\"\r\n\r\nimport logging\r\nlogging.basicConfig(level=logging.DEBUG)\r\n\r\nimport datasets\r\n# >>> datasets.__version__\r\n# '2.18.0'\r\n\r\ndatasets.utils.logging.set_verbosity_info()\r\ndata = datasets.load_dataset(\"c-s-ale/dolly-15k-instruction-alpaca-format\")\r\n```"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6750/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6750 | https://github.com/huggingface/datasets/issues/6750 | false |
2,202,310,116 | https://api.github.com/repos/huggingface/datasets/issues/6749/labels{/name} | Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0` | 2024-03-22T14:51:45Z | 6,749 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-22T11:44:11Z | https://api.github.com/repos/huggingface/datasets/issues/6749/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6749/timeline | Fix fsspec tqdm callback | https://api.github.com/repos/huggingface/datasets/issues/6749/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-22T14:45:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6749.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6749",
"merged_at": "2024-03-22T14:45:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6749.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6749"
} | PR_kwDODunzps5qeoSk | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6749). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005017 / 0.011353 (-0.006336) | 0.002958 / 0.011008 (-0.008050) | 0.063455 / 0.038508 (0.024946) | 0.028206 / 0.023109 (0.005096) | 0.230884 / 0.275898 (-0.045014) | 0.252688 / 0.323480 (-0.070792) | 0.002995 / 0.007986 (-0.004991) | 0.002613 / 0.004328 (-0.001716) | 0.046477 / 0.004250 (0.042226) | 0.040662 / 0.037052 (0.003609) | 0.241824 / 0.258489 (-0.016665) | 0.269063 / 0.293841 (-0.024778) | 0.027336 / 0.128546 (-0.101210) | 0.010614 / 0.075646 (-0.065032) | 0.216087 / 0.419271 (-0.203184) | 0.035667 / 0.043533 (-0.007866) | 0.238657 / 0.255139 (-0.016482) | 0.253433 / 0.283200 (-0.029767) | 0.017433 / 0.141683 (-0.124250) | 1.120856 / 1.452155 (-0.331299) | 1.157415 / 1.492716 (-0.335302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088028 / 0.018006 (0.070022) | 0.277368 / 0.000490 (0.276878) | 0.000204 / 0.000200 (0.000004) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017956 / 0.037411 (-0.019455) | 0.061061 / 0.014526 (0.046535) | 0.073323 / 0.176557 (-0.103234) | 0.119254 / 0.737135 (-0.617881) | 0.074308 / 0.296338 (-0.222031) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285118 / 0.215209 (0.069908) | 2.785796 / 2.077655 (0.708142) | 1.476436 / 1.504120 (-0.027684) | 1.356505 / 1.541195 (-0.184690) | 1.362505 / 
1.468490 (-0.105985) | 0.554064 / 4.584777 (-4.030713) | 2.395774 / 3.745712 (-1.349938) | 2.713703 / 5.269862 (-2.556159) | 1.701020 / 4.565676 (-2.864657) | 0.062370 / 0.424275 (-0.361905) | 0.004944 / 0.007607 (-0.002663) | 0.327948 / 0.226044 (0.101904) | 3.243739 / 2.268929 (0.974811) | 1.803881 / 55.444624 (-53.640743) | 1.551635 / 6.876477 (-5.324841) | 1.560627 / 2.142072 (-0.581446) | 0.628187 / 4.805227 (-4.177040) | 0.115824 / 6.500664 (-6.384840) | 0.041655 / 0.075469 (-0.033814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968797 / 1.841788 (-0.872991) | 11.220905 / 8.074308 (3.146597) | 9.322584 / 10.191392 (-0.868808) | 0.139629 / 0.680424 (-0.540795) | 0.013823 / 0.534201 (-0.520378) | 0.286700 / 0.579283 (-0.292583) | 0.263517 / 0.434364 (-0.170847) | 0.341264 / 0.540337 (-0.199074) | 0.418834 / 1.386936 (-0.968102) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005404 / 0.011353 (-0.005949) | 0.003630 / 0.011008 (-0.007378) | 0.048977 / 0.038508 (0.010469) | 0.029980 / 0.023109 (0.006871) | 0.274671 / 0.275898 (-0.001227) | 0.295671 / 0.323480 (-0.027808) | 0.004230 / 0.007986 (-0.003756) | 0.002656 / 0.004328 (-0.001672) | 0.048603 / 0.004250 (0.044353) | 0.044323 / 0.037052 (0.007271) | 0.286499 / 0.258489 (0.028010) | 0.313199 / 0.293841 (0.019358) | 0.030079 / 0.128546 (-0.098468) | 0.010480 / 0.075646 (-0.065166) | 0.058226 / 0.419271 (-0.361045) | 0.054920 / 0.043533 (0.011387) | 0.274921 / 0.255139 (0.019783) | 0.296559 / 0.283200 (0.013360) | 0.019164 / 0.141683 (-0.122519) | 1.154703 / 1.452155 (-0.297452) | 1.207015 / 1.492716 (-0.285701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089368 / 0.018006 (0.071362) | 0.301196 / 0.000490 (0.300706) | 0.000208 / 0.000200 (0.000008) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021355 / 0.037411 (-0.016056) | 0.074688 / 0.014526 (0.060162) | 0.085840 / 0.176557 (-0.090716) | 0.125784 / 0.737135 (-0.611351) | 0.087103 / 0.296338 (-0.209235) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296727 / 0.215209 (0.081518) | 2.884922 / 2.077655 (0.807267) | 1.586515 / 1.504120 (0.082395) | 1.474417 / 1.541195 (-0.066777) | 1.492105 / 1.468490 (0.023615) | 0.570016 / 4.584777 (-4.014761) | 2.435760 / 3.745712 (-1.309952) | 2.657999 / 5.269862 (-2.611863) | 1.740160 / 4.565676 (-2.825516) | 0.063743 / 0.424275 (-0.360532) | 0.005048 / 0.007607 (-0.002559) | 0.341279 / 0.226044 (0.115235) | 3.396185 / 2.268929 (1.127256) | 1.952825 / 55.444624 (-53.491800) | 1.676669 / 6.876477 (-5.199808) | 1.773158 / 2.142072 (-0.368915) | 0.650664 / 4.805227 (-4.154563) | 0.116815 / 6.500664 (-6.383849) | 0.040813 / 0.075469 (-0.034656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999836 / 1.841788 (-0.841952) | 11.854540 / 8.074308 (3.780232) | 10.245516 / 10.191392 (0.054124) | 0.141235 / 0.680424 (-0.539189) | 0.015562 / 0.534201 (-0.518639) | 0.287556 / 0.579283 (-0.291727) | 0.274946 / 0.434364 (-0.159418) | 0.324652 / 0.540337 (-0.215685) | 0.449204 / 1.386936 (-0.937733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed2b406d045349dad16738985c947fe743260710 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6749/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6749 | https://github.com/huggingface/datasets/pull/6749 | true |
2,201,517,348 | https://api.github.com/repos/huggingface/datasets/issues/6748/labels{/name} | ### Describe the bug
I load a dataset and then slice the first 300 samples using the `:` operator; however, the result is not what I expect, as shown in the output below:
```bash
len(dataset)=1050324
len(dataset[:300])=2
len(dataset[0:300])=2
len(dataset.select(range(300)))=300
```
### Steps to reproduce the bug
load a dataset then:
```python
dataset = load_from_disk(args.train_data_dir)
print(f"{len(dataset)=}", flush=True)
print(f"{len(dataset[:300])=}", flush=True)
print(f"{len(dataset[0:300])=}", flush=True)
print(f"{len(dataset.select(range(300)))=}", flush=True)
```
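A self-contained sketch (with a hypothetical two-column dataset) that reproduces the same counts without loading anything from disk:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(1000)), "b": list(range(1000))})

print(len(ds))                     # 1000 rows
print(len(ds[:300]))               # 2   -> slicing returns a dict with one key per column
print(len(ds[:300]["a"]))          # 300 -> each column in that dict holds 300 values
print(len(ds.select(range(300))))  # 300 -> select() returns a new 300-row Dataset
```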
### Expected behavior
```bash
len(dataset)=1050324
len(dataset[:300])=300
len(dataset[0:300])=300
len(dataset.select(range(300)))=300
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- `huggingface_hub` version: 0.20.2
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 2024-03-22T16:43:57Z | 6,748 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-22T01:49:13Z | https://api.github.com/repos/huggingface/datasets/issues/6748/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6748/timeline | Strange slicing behavior | https://api.github.com/repos/huggingface/datasets/issues/6748/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DOH0k | [
"As explained in the [docs](https://huggingface.co/docs/datasets/v2.18.0/en/access#slicing), slicing a `Dataset` returns a dictionary that maps its column names to their values. So, `len(dataset[:300])=2` is expected, assuming your dataset has 2 columns (the returned dict has 2 keys, but each value in the dict has 300 items).\r\n` "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6748/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6748 | https://github.com/huggingface/datasets/issues/6748 | false |
2,201,219,384 | https://api.github.com/repos/huggingface/datasets/issues/6747/labels{/name} | There were a few fixes released recently, some DVC ecosystem packages require newer version of `fsspec`. | 2024-03-22T16:40:15Z | 6,747 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-21T21:25:49Z | https://api.github.com/repos/huggingface/datasets/issues/6747/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6747/timeline | chore(deps): bump fsspec | https://api.github.com/repos/huggingface/datasets/issues/6747/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3659196?v=4",
"events_url": "https://api.github.com/users/shcheklein/events{/privacy}",
"followers_url": "https://api.github.com/users/shcheklein/followers",
"following_url": "https://api.github.com/users/shcheklein/following{/other_user}",
"gists_url": "https://api.github.com/users/shcheklein/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shcheklein",
"id": 3659196,
"login": "shcheklein",
"node_id": "MDQ6VXNlcjM2NTkxOTY=",
"organizations_url": "https://api.github.com/users/shcheklein/orgs",
"received_events_url": "https://api.github.com/users/shcheklein/received_events",
"repos_url": "https://api.github.com/users/shcheklein/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shcheklein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shcheklein/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shcheklein"
} | [] | null | null | CONTRIBUTOR | 2024-03-22T16:28:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6747",
"merged_at": "2024-03-22T16:28:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6747"
} | PR_kwDODunzps5qa5L- | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6747). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005129 / 0.011353 (-0.006224) | 0.003788 / 0.011008 (-0.007220) | 0.063456 / 0.038508 (0.024948) | 0.029079 / 0.023109 (0.005969) | 0.237228 / 0.275898 (-0.038670) | 0.260554 / 0.323480 (-0.062926) | 0.003090 / 0.007986 (-0.004895) | 0.002730 / 0.004328 (-0.001599) | 0.049040 / 0.004250 (0.044789) | 0.042432 / 0.037052 (0.005380) | 0.256954 / 0.258489 (-0.001535) | 0.285912 / 0.293841 (-0.007929) | 0.027568 / 0.128546 (-0.100978) | 0.010402 / 0.075646 (-0.065245) | 0.206773 / 0.419271 (-0.212499) | 0.035381 / 0.043533 (-0.008152) | 0.243147 / 0.255139 (-0.011992) | 0.259419 / 0.283200 (-0.023781) | 0.019503 / 0.141683 (-0.122180) | 1.145537 / 1.452155 (-0.306618) | 1.204070 / 1.492716 (-0.288646) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092298 / 0.018006 (0.074291) | 0.300042 / 0.000490 (0.299553) | 0.000236 / 0.000200 (0.000036) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018624 / 0.037411 (-0.018788) | 0.063832 / 0.014526 (0.049306) | 0.075849 / 0.176557 (-0.100707) | 0.120919 / 0.737135 (-0.616216) | 0.075878 / 0.296338 (-0.220461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275545 / 0.215209 (0.060336) | 2.706004 / 2.077655 (0.628349) | 1.406398 / 1.504120 (-0.097722) | 1.287154 / 1.541195 (-0.254041) | 1.298278 / 
1.468490 (-0.170212) | 0.559763 / 4.584777 (-4.025014) | 2.434104 / 3.745712 (-1.311608) | 2.786338 / 5.269862 (-2.483523) | 1.720951 / 4.565676 (-2.844726) | 0.062082 / 0.424275 (-0.362193) | 0.004931 / 0.007607 (-0.002676) | 0.329998 / 0.226044 (0.103954) | 3.222105 / 2.268929 (0.953176) | 1.777539 / 55.444624 (-53.667085) | 1.533845 / 6.876477 (-5.342632) | 1.520357 / 2.142072 (-0.621715) | 0.638850 / 4.805227 (-4.166377) | 0.116718 / 6.500664 (-6.383946) | 0.042215 / 0.075469 (-0.033254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962791 / 1.841788 (-0.878997) | 11.509889 / 8.074308 (3.435581) | 9.507676 / 10.191392 (-0.683716) | 0.140780 / 0.680424 (-0.539644) | 0.014187 / 0.534201 (-0.520014) | 0.286363 / 0.579283 (-0.292920) | 0.263316 / 0.434364 (-0.171048) | 0.322099 / 0.540337 (-0.218239) | 0.415602 / 1.386936 (-0.971334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005175 / 0.011353 (-0.006178) | 0.003631 / 0.011008 (-0.007377) | 0.050277 / 0.038508 (0.011769) | 0.031879 / 0.023109 (0.008770) | 0.269966 / 0.275898 (-0.005933) | 0.297229 / 0.323480 (-0.026251) | 0.004278 / 0.007986 (-0.003707) | 0.002936 / 0.004328 (-0.001393) | 0.048686 / 0.004250 (0.044436) | 0.044262 / 0.037052 (0.007209) | 0.284578 / 0.258489 (0.026089) | 0.313681 / 0.293841 (0.019840) | 0.029064 / 0.128546 (-0.099482) | 0.010700 / 0.075646 (-0.064946) | 0.058366 / 0.419271 (-0.360905) | 0.051341 / 0.043533 (0.007809) | 0.271262 / 0.255139 (0.016123) | 0.290791 / 0.283200 (0.007591) | 0.019044 / 0.141683 (-0.122639) | 1.149514 / 1.452155 (-0.302641) | 1.209277 / 1.492716 (-0.283439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094879 / 0.018006 (0.076872) | 0.302196 / 0.000490 (0.301707) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021715 / 0.037411 (-0.015696) | 0.075122 / 0.014526 (0.060596) | 0.087393 / 0.176557 (-0.089164) | 0.125583 / 0.737135 (-0.611553) | 0.088722 / 0.296338 (-0.207617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295158 / 0.215209 (0.079949) | 2.930208 / 2.077655 (0.852553) | 1.590197 / 1.504120 (0.086077) | 1.459038 / 1.541195 (-0.082156) | 1.471690 / 1.468490 (0.003200) | 0.570279 / 4.584777 (-4.014498) | 2.456971 / 3.745712 (-1.288741) | 2.675315 / 5.269862 (-2.594547) | 1.750122 / 4.565676 (-2.815554) | 0.062905 / 0.424275 (-0.361370) | 0.005118 / 0.007607 (-0.002489) | 0.344263 / 0.226044 (0.118219) | 3.472460 / 2.268929 (1.203532) | 1.931707 / 55.444624 (-53.512917) | 1.658537 / 6.876477 (-5.217939) | 1.785794 / 2.142072 (-0.356278) | 0.637149 / 4.805227 (-4.168078) | 0.115838 / 6.500664 (-6.384826) | 0.040771 / 0.075469 (-0.034698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002869 / 1.841788 (-0.838919) | 12.048825 / 8.074308 (3.974517) | 10.407979 / 10.191392 (0.216587) | 0.150300 / 0.680424 (-0.530124) | 0.015299 / 0.534201 (-0.518902) | 0.286277 / 0.579283 (-0.293006) | 0.312186 / 0.434364 (-0.122178) | 0.322633 / 0.540337 (-0.217704) | 0.438431 / 1.386936 (-0.948505) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d5468836fe94e8be1ae093397dd43d4a2503b926 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6747/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6747 | https://github.com/huggingface/datasets/pull/6747 | true |
2,198,993,949 | https://api.github.com/repos/huggingface/datasets/issues/6746/labels{/name} | ### Describe the bug
I encountered a bug when running the example command line:
```bash
python main.py \
--model decapoda-research/llama-7b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save out/llama_7b/unstructured/wanda/
```
The bug occurred at these lines of code (when loading the C4 dataset):
```python
traindata = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
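For reference, a sketch of the same calls without the legacy `'allenai--c4'` config name, which the updated `allenai/c4` repository layout appears to expect (untested in this environment):

```python
from datasets import load_dataset

traindata = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```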
### Steps to reproduce the bug
1. Run the example command line above; the error is raised while the C4 dataset is being loaded.
### Expected behavior
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
### Environment info
I'm using CUDA 12.4, so I use `pip install pytorch` instead of the conda setup provided in install.md.
Also, I've tried another environment using the same commands in install.md, but the same bug occurred | 2024-04-09T07:30:56Z | 6,746 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-21T02:53:04Z | https://api.github.com/repos/huggingface/datasets/issues/6746/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6746/timeline | ExpectedMoreSplits error when loading C4 dataset | https://api.github.com/repos/huggingface/datasets/issues/6746/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/65165345?v=4",
"events_url": "https://api.github.com/users/billwang485/events{/privacy}",
"followers_url": "https://api.github.com/users/billwang485/followers",
"following_url": "https://api.github.com/users/billwang485/following{/other_user}",
"gists_url": "https://api.github.com/users/billwang485/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/billwang485",
"id": 65165345,
"login": "billwang485",
"node_id": "MDQ6VXNlcjY1MTY1MzQ1",
"organizations_url": "https://api.github.com/users/billwang485/orgs",
"received_events_url": "https://api.github.com/users/billwang485/received_events",
"repos_url": "https://api.github.com/users/billwang485/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/billwang485/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billwang485/subscriptions",
"type": "User",
"url": "https://api.github.com/users/billwang485"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DEfwd | [
"Hi ! We updated the `allenai/c4` repository to allow people to specify which language to load easily (the the [c4 dataset page](https://huggingface.co/datasets/allenai/c4))\r\n\r\nTo fix this issue you can update `datasets` and remove the mention of the legacy configuration name \"allenai--c4\":\r\n\r\n```python\r\ntraindata = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')\r\nvaldata = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')\r\n```",
"Did you solve this problem?I have the same bug.It is no use to delete \"allenai--c4\".",
"Did you solve it? I met this problem too."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6746/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6746 | https://github.com/huggingface/datasets/issues/6746 | false |
2,198,541,732 | https://api.github.com/repos/huggingface/datasets/issues/6745/labels{/name} | ### Feature request
https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense.
### Motivation
[EDITED: insults not tolerated]
### Your contribution
[EDITED: insults not tolerated] | 2024-03-21T12:28:04Z | 6,745 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-20T20:54:06Z | https://api.github.com/repos/huggingface/datasets/issues/6745/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6745/timeline | Scraping the whole of github including private repos is bad; kindly stop | https://api.github.com/repos/huggingface/datasets/issues/6745/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost"
} | [] | null | completed | NONE | 2024-03-21T10:24:56Z | null | I_kwDODunzps6DCxWk | [
"It's not twitter here"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6745/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6745 | https://github.com/huggingface/datasets/issues/6745 | false |
2,197,910,168 | https://api.github.com/repos/huggingface/datasets/issues/6744/labels{/name} | ### Feature request
Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.
### Motivation
File locking doesn't work on all file systems (in my case an NFS-mounted Weka file system). If the `cache_dir` only held small files, it would be possible to point it to local disk and the problem would be solved. However, as `cache_dir` is where both the small info files and the processed datasets are written, this isn't a feasible solution.
Considering https://github.com/huggingface/datasets/issues/6395, I still think this is something that belongs in HuggingFace. Being able to control each package separately is valuable: a user might have their dataset on a file system that doesn't support file locking while still using file locking on local disk to control some other type of access.
### Your contribution
My suggested solution:
```
diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py
index 19620e6e..58f41a02 100644
--- a/src/datasets/utils/_filelock.py
+++ b/src/datasets/utils/_filelock.py
@@ -18,11 +18,15 @@
import os
from filelock import FileLock as FileLock_
-from filelock import UnixFileLock
+from filelock import SoftFileLock, UnixFileLock
from filelock import __version__ as _filelock_version
from packaging import version
+if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'):
+ FileLock_ = SoftFileLock
+
+
class FileLock(FileLock_):
"""
A `filelock.FileLock` initializer that handles long paths.
```
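With the patch above applied, opting into soft file locks could look like the sketch below. Note that `HF_USE_SOFTFILELOCK` is only the name proposed in the diff, not an existing option, and the data file path is hypothetical:

```python
import os
# Must be set before `datasets` is imported, because the patched _filelock
# module reads the variable at import time.
os.environ["HF_USE_SOFTFILELOCK"] = "1"

import datasets

ds = datasets.load_dataset("json", data_files="data.jsonl")  # hypothetical local file
```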
| 2024-03-20T15:59:45Z | 6,744 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-20T15:59:45Z | https://api.github.com/repos/huggingface/datasets/issues/6744/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6744/timeline | Option to disable file locking | https://api.github.com/repos/huggingface/datasets/issues/6744/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/35767167?v=4",
"events_url": "https://api.github.com/users/VRehnberg/events{/privacy}",
"followers_url": "https://api.github.com/users/VRehnberg/followers",
"following_url": "https://api.github.com/users/VRehnberg/following{/other_user}",
"gists_url": "https://api.github.com/users/VRehnberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VRehnberg",
"id": 35767167,
"login": "VRehnberg",
"node_id": "MDQ6VXNlcjM1NzY3MTY3",
"organizations_url": "https://api.github.com/users/VRehnberg/orgs",
"received_events_url": "https://api.github.com/users/VRehnberg/received_events",
"repos_url": "https://api.github.com/users/VRehnberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VRehnberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VRehnberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VRehnberg"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DAXKY | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6744/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6744 | https://github.com/huggingface/datasets/issues/6744 | false |
2,195,481,697 | https://api.github.com/repos/huggingface/datasets/issues/6743/labels{/name} | Fix #6738 | 2024-04-08T13:08:42Z | 6,743 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T16:54:22Z | https://api.github.com/repos/huggingface/datasets/issues/6743/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6743/timeline | Allow null values in dict columns | https://api.github.com/repos/huggingface/datasets/issues/6743/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-19T20:05:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6743",
"merged_at": "2024-03-19T20:05:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6743"
} | PR_kwDODunzps5qHeMZ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6743). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005013 / 0.011353 (-0.006340) | 0.003228 / 0.011008 (-0.007780) | 0.062763 / 0.038508 (0.024255) | 0.028937 / 0.023109 (0.005828) | 0.240777 / 0.275898 (-0.035121) | 0.266972 / 0.323480 (-0.056508) | 0.003073 / 0.007986 (-0.004913) | 0.002769 / 0.004328 (-0.001560) | 0.049265 / 0.004250 (0.045015) | 0.042061 / 0.037052 (0.005009) | 0.261714 / 0.258489 (0.003225) | 0.284896 / 0.293841 (-0.008944) | 0.027717 / 0.128546 (-0.100829) | 0.010430 / 0.075646 (-0.065216) | 0.209022 / 0.419271 (-0.210249) | 0.035941 / 0.043533 (-0.007591) | 0.246849 / 0.255139 (-0.008290) | 0.263205 / 0.283200 (-0.019994) | 0.019489 / 0.141683 (-0.122193) | 1.102595 / 1.452155 (-0.349559) | 1.170493 / 1.492716 (-0.322223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093611 / 0.018006 (0.075604) | 0.302041 / 0.000490 (0.301551) | 0.000223 / 0.000200 (0.000023) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018720 / 0.037411 (-0.018692) | 0.062199 / 0.014526 (0.047673) | 0.074888 / 0.176557 (-0.101669) | 0.120184 / 0.737135 (-0.616951) | 0.076756 / 0.296338 (-0.219583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287484 / 0.215209 (0.072275) | 2.787777 / 2.077655 (0.710123) | 1.488957 / 1.504120 (-0.015163) | 1.362678 / 1.541195 (-0.178517) | 1.364571 / 
1.468490 (-0.103919) | 0.563139 / 4.584777 (-4.021638) | 2.422224 / 3.745712 (-1.323488) | 2.798011 / 5.269862 (-2.471850) | 1.751159 / 4.565676 (-2.814517) | 0.062740 / 0.424275 (-0.361536) | 0.004918 / 0.007607 (-0.002689) | 0.338285 / 0.226044 (0.112240) | 3.316012 / 2.268929 (1.047083) | 1.845975 / 55.444624 (-53.598650) | 1.553187 / 6.876477 (-5.323290) | 1.564582 / 2.142072 (-0.577490) | 0.645987 / 4.805227 (-4.159240) | 0.118216 / 6.500664 (-6.382448) | 0.041243 / 0.075469 (-0.034226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970265 / 1.841788 (-0.871522) | 11.783152 / 8.074308 (3.708844) | 9.516584 / 10.191392 (-0.674808) | 0.148086 / 0.680424 (-0.532338) | 0.013689 / 0.534201 (-0.520512) | 0.289657 / 0.579283 (-0.289626) | 0.265966 / 0.434364 (-0.168398) | 0.328483 / 0.540337 (-0.211854) | 0.433544 / 1.386936 (-0.953392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005235 / 0.011353 (-0.006118) | 0.003515 / 0.011008 (-0.007493) | 0.049484 / 0.038508 (0.010976) | 0.029264 / 0.023109 (0.006154) | 0.278518 / 0.275898 (0.002620) | 0.298948 / 0.323480 (-0.024532) | 0.004308 / 0.007986 (-0.003678) | 0.002751 / 0.004328 (-0.001577) | 0.048952 / 0.004250 (0.044701) | 0.045379 / 0.037052 (0.008327) | 0.292633 / 0.258489 (0.034144) | 0.319405 / 0.293841 (0.025564) | 0.030201 / 0.128546 (-0.098345) | 0.010657 / 0.075646 (-0.064990) | 0.057842 / 0.419271 (-0.361430) | 0.053359 / 0.043533 (0.009826) | 0.281136 / 0.255139 (0.025997) | 0.295388 / 0.283200 (0.012188) | 0.018786 / 0.141683 (-0.122897) | 1.187181 / 1.452155 (-0.264974) | 1.198394 / 1.492716 (-0.294323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093861 / 0.018006 (0.075855) | 0.304019 / 0.000490 (0.303529) | 0.000220 / 0.000200 (0.000020) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021582 / 0.037411 (-0.015829) | 0.075381 / 0.014526 (0.060855) | 0.087886 / 0.176557 (-0.088671) | 0.125078 / 0.737135 (-0.612057) | 0.089339 / 0.296338 (-0.206999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295797 / 0.215209 (0.080588) | 2.912021 / 2.077655 (0.834367) | 1.592191 / 1.504120 (0.088071) | 1.471270 / 1.541195 (-0.069925) | 1.475535 / 1.468490 (0.007045) | 0.564114 / 4.584777 (-4.020663) | 2.442882 / 3.745712 (-1.302830) | 2.679433 / 5.269862 (-2.590428) | 1.752097 / 4.565676 (-2.813579) | 0.062748 / 0.424275 (-0.361527) | 0.005068 / 0.007607 (-0.002539) | 0.345554 / 0.226044 (0.119509) | 3.456929 / 2.268929 (1.188000) | 1.962781 / 55.444624 (-53.481844) | 1.688313 / 6.876477 (-5.188164) | 1.817392 / 2.142072 (-0.324681) | 0.639588 / 4.805227 (-4.165639) | 0.116148 / 6.500664 (-6.384516) | 0.040851 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009852 / 1.841788 (-0.831936) | 12.031749 / 8.074308 (3.957440) | 10.305107 / 10.191392 (0.113715) | 0.132960 / 0.680424 (-0.547464) | 0.014779 / 0.534201 (-0.519422) | 0.288903 / 0.579283 (-0.290381) | 0.275417 / 0.434364 (-0.158947) | 0.322628 / 0.540337 (-0.217709) | 0.445060 / 1.386936 (-0.941876) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f234fce40d5ffc96fac5198d8cc89817970d87ee \"CML watermark\")\n",
"notify https://huggingface.co/datasets/chaoyi-wu/PMC-Inline/discussions/1 once it's merged in dataset-viewer"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6743/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6743 | https://github.com/huggingface/datasets/pull/6743 | true |
2,195,134,854 | https://api.github.com/repos/huggingface/datasets/issues/6742/labels{/name} | Reported in https://github.com/huggingface/datasets-server/issues/2607 | 2024-03-19T18:24:39Z | 6,742 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T14:29:25Z | https://api.github.com/repos/huggingface/datasets/issues/6742/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6742/timeline | Fix missing download_config in get_data_patterns | https://api.github.com/repos/huggingface/datasets/issues/6742/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-19T18:15:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6742",
"merged_at": "2024-03-19T18:15:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6742"
} | PR_kwDODunzps5qGSfG | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6742). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005394 / 0.011353 (-0.005959) | 0.003780 / 0.011008 (-0.007228) | 0.063459 / 0.038508 (0.024951) | 0.028883 / 0.023109 (0.005774) | 0.239159 / 0.275898 (-0.036739) | 0.258123 / 0.323480 (-0.065357) | 0.003134 / 0.007986 (-0.004851) | 0.003452 / 0.004328 (-0.000876) | 0.049255 / 0.004250 (0.045005) | 0.042727 / 0.037052 (0.005675) | 0.257387 / 0.258489 (-0.001102) | 0.280762 / 0.293841 (-0.013079) | 0.027921 / 0.128546 (-0.100625) | 0.010867 / 0.075646 (-0.064779) | 0.207878 / 0.419271 (-0.211393) | 0.036003 / 0.043533 (-0.007530) | 0.247457 / 0.255139 (-0.007682) | 0.260231 / 0.283200 (-0.022969) | 0.019741 / 0.141683 (-0.121942) | 1.143645 / 1.452155 (-0.308510) | 1.188789 / 1.492716 (-0.303927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092065 / 0.018006 (0.074059) | 0.286021 / 0.000490 (0.285531) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018934 / 0.037411 (-0.018477) | 0.062474 / 0.014526 (0.047949) | 0.073384 / 0.176557 (-0.103172) | 0.121276 / 0.737135 (-0.615860) | 0.077792 / 0.296338 (-0.218546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285352 / 0.215209 (0.070143) | 2.783110 / 2.077655 (0.705456) | 1.487983 / 1.504120 (-0.016137) | 1.364264 / 1.541195 (-0.176930) | 1.388757 / 
1.468490 (-0.079733) | 0.568347 / 4.584777 (-4.016430) | 2.402451 / 3.745712 (-1.343261) | 2.835577 / 5.269862 (-2.434285) | 1.754853 / 4.565676 (-2.810824) | 0.063355 / 0.424275 (-0.360920) | 0.005010 / 0.007607 (-0.002598) | 0.332061 / 0.226044 (0.106016) | 3.287121 / 2.268929 (1.018193) | 1.829520 / 55.444624 (-53.615104) | 1.542669 / 6.876477 (-5.333808) | 1.560679 / 2.142072 (-0.581393) | 0.642371 / 4.805227 (-4.162856) | 0.118636 / 6.500664 (-6.382028) | 0.042262 / 0.075469 (-0.033207) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984803 / 1.841788 (-0.856985) | 11.578044 / 8.074308 (3.503735) | 9.383428 / 10.191392 (-0.807964) | 0.141367 / 0.680424 (-0.539057) | 0.014047 / 0.534201 (-0.520154) | 0.291505 / 0.579283 (-0.287778) | 0.270199 / 0.434364 (-0.164165) | 0.329874 / 0.540337 (-0.210463) | 0.429386 / 1.386936 (-0.957550) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006031) | 0.004023 / 0.011008 (-0.006986) | 0.050126 / 0.038508 (0.011618) | 0.029937 / 0.023109 (0.006828) | 0.275985 / 0.275898 (0.000087) | 0.297965 / 0.323480 (-0.025515) | 0.004429 / 0.007986 (-0.003557) | 0.002729 / 0.004328 (-0.001599) | 0.048995 / 0.004250 (0.044744) | 0.044940 / 0.037052 (0.007888) | 0.288397 / 0.258489 (0.029908) | 0.317716 / 0.293841 (0.023875) | 0.029705 / 0.128546 (-0.098841) | 0.010972 / 0.075646 (-0.064674) | 0.058592 / 0.419271 (-0.360680) | 0.054640 / 0.043533 (0.011108) | 0.276456 / 0.255139 (0.021317) | 0.295119 / 0.283200 (0.011919) | 0.020032 / 0.141683 (-0.121651) | 1.175740 / 1.452155 (-0.276415) | 1.227246 / 1.492716 (-0.265471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092204 / 0.018006 (0.074197) | 0.300344 / 0.000490 (0.299855) | 0.000213 / 0.000200 (0.000013) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021540 / 0.037411 (-0.015871) | 0.076252 / 0.014526 (0.061726) | 0.087582 / 0.176557 (-0.088975) | 0.125977 / 0.737135 (-0.611159) | 0.090649 / 0.296338 (-0.205689) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294544 / 0.215209 (0.079335) | 2.883736 / 2.077655 (0.806082) | 1.570932 / 1.504120 (0.066812) | 1.449082 / 1.541195 (-0.092113) | 1.463262 / 1.468490 (-0.005228) | 0.559625 / 4.584777 (-4.025152) | 2.448593 / 3.745712 (-1.297119) | 2.663857 / 5.269862 (-2.606005) | 1.757812 / 4.565676 (-2.807865) | 0.061999 / 0.424275 (-0.362276) | 0.005100 / 0.007607 (-0.002507) | 0.343620 / 0.226044 (0.117575) | 3.487059 / 2.268929 (1.218130) | 1.963078 / 55.444624 (-53.481546) | 1.661758 / 6.876477 (-5.214719) | 1.799130 / 2.142072 (-0.342942) | 0.650194 / 4.805227 (-4.155034) | 0.117375 / 6.500664 (-6.383289) | 0.040957 / 0.075469 (-0.034512) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.037882 / 1.841788 (-0.803906) | 12.239784 / 8.074308 (4.165476) | 10.478186 / 10.191392 (0.286794) | 0.164446 / 0.680424 (-0.515978) | 0.014901 / 0.534201 (-0.519300) | 0.302485 / 0.579283 (-0.276798) | 0.283994 / 0.434364 (-0.150370) | 0.338473 / 0.540337 (-0.201864) | 0.468901 / 1.386936 (-0.918035) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fa934e275d240d9b1228b2f598bc96390299339 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6742/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6742 | https://github.com/huggingface/datasets/pull/6742 | true |
2,194,626,108 | https://api.github.com/repos/huggingface/datasets/issues/6741/labels{/name} | Reported in https://github.com/huggingface/datasets/issues/4760
The cache was not able to reload a dataset with a single config from the cache if the config name is not specified.
For example
```python
from datasets import load_dataset, config
config.HF_DATASETS_OFFLINE = True
load_dataset("openai_humaneval")
```
This was due to a regression in https://github.com/huggingface/datasets/pull/6632 | 2024-03-25T16:35:21Z | 6,741 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T10:48:32Z | https://api.github.com/repos/huggingface/datasets/issues/6741/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6741/timeline | Fix offline mode with single config | https://api.github.com/repos/huggingface/datasets/issues/6741/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-25T16:23:59Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6741",
"merged_at": "2024-03-25T16:23:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6741"
} | PR_kwDODunzps5qEiu3 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6741). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006260) | 0.003317 / 0.011008 (-0.007692) | 0.064795 / 0.038508 (0.026287) | 0.030373 / 0.023109 (0.007263) | 0.258776 / 0.275898 (-0.017122) | 0.269768 / 0.323480 (-0.053711) | 0.004186 / 0.007986 (-0.003799) | 0.002630 / 0.004328 (-0.001699) | 0.048643 / 0.004250 (0.044392) | 0.044220 / 0.037052 (0.007168) | 0.265113 / 0.258489 (0.006624) | 0.292202 / 0.293841 (-0.001639) | 0.027468 / 0.128546 (-0.101079) | 0.010123 / 0.075646 (-0.065523) | 0.226869 / 0.419271 (-0.192402) | 0.035739 / 0.043533 (-0.007794) | 0.253193 / 0.255139 (-0.001946) | 0.271002 / 0.283200 (-0.012198) | 0.017201 / 0.141683 (-0.124482) | 1.105836 / 1.452155 (-0.346318) | 1.161559 / 1.492716 (-0.331158) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090481 / 0.018006 (0.072475) | 0.299013 / 0.000490 (0.298524) | 0.000220 / 0.000200 (0.000020) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017684 / 0.037411 (-0.019727) | 0.061580 / 0.014526 (0.047054) | 0.074370 / 0.176557 (-0.102186) | 0.119468 / 0.737135 (-0.617667) | 0.074671 / 0.296338 (-0.221668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284778 / 0.215209 (0.069569) | 2.780241 / 2.077655 (0.702586) | 1.504025 / 1.504120 (-0.000095) | 1.386644 / 1.541195 (-0.154550) | 1.402038 / 
1.468490 (-0.066452) | 0.555180 / 4.584777 (-4.029597) | 2.410973 / 3.745712 (-1.334740) | 2.773252 / 5.269862 (-2.496610) | 1.722784 / 4.565676 (-2.842892) | 0.062773 / 0.424275 (-0.361502) | 0.004959 / 0.007607 (-0.002648) | 0.337163 / 0.226044 (0.111119) | 3.356947 / 2.268929 (1.088019) | 1.880953 / 55.444624 (-53.563671) | 1.556049 / 6.876477 (-5.320427) | 1.578589 / 2.142072 (-0.563483) | 0.641993 / 4.805227 (-4.163234) | 0.118624 / 6.500664 (-6.382040) | 0.042202 / 0.075469 (-0.033268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995321 / 1.841788 (-0.846467) | 12.257597 / 8.074308 (4.183289) | 9.646214 / 10.191392 (-0.545178) | 0.131124 / 0.680424 (-0.549300) | 0.014119 / 0.534201 (-0.520082) | 0.287597 / 0.579283 (-0.291686) | 0.266983 / 0.434364 (-0.167381) | 0.328165 / 0.540337 (-0.212173) | 0.422405 / 1.386936 (-0.964531) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005091 / 0.011353 (-0.006262) | 0.003358 / 0.011008 (-0.007650) | 0.049136 / 0.038508 (0.010628) | 0.031075 / 0.023109 (0.007966) | 0.275047 / 0.275898 (-0.000851) | 0.296845 / 0.323480 (-0.026635) | 0.004949 / 0.007986 (-0.003037) | 0.002586 / 0.004328 (-0.001743) | 0.048164 / 0.004250 (0.043913) | 0.040754 / 0.037052 (0.003702) | 0.288715 / 0.258489 (0.030226) | 0.312383 / 0.293841 (0.018542) | 0.029372 / 0.128546 (-0.099174) | 0.010097 / 0.075646 (-0.065549) | 0.056752 / 0.419271 (-0.362520) | 0.033128 / 0.043533 (-0.010405) | 0.274986 / 0.255139 (0.019847) | 0.292692 / 0.283200 (0.009493) | 0.018309 / 0.141683 (-0.123374) | 1.190320 / 1.452155 (-0.261834) | 1.222529 / 1.492716 (-0.270188) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091717 / 0.018006 (0.073711) | 0.300278 / 0.000490 (0.299788) | 0.000217 / 0.000200 (0.000017) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021394 / 0.037411 (-0.016018) | 0.074918 / 0.014526 (0.060392) | 0.087461 / 0.176557 (-0.089095) | 0.125499 / 0.737135 (-0.611636) | 0.087484 / 0.296338 (-0.208854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296557 / 0.215209 (0.081348) | 2.905527 / 2.077655 (0.827872) | 1.624640 / 1.504120 (0.120520) | 1.505495 / 1.541195 (-0.035700) | 1.514066 / 1.468490 (0.045576) | 0.569376 / 4.584777 (-4.015401) | 2.448575 / 3.745712 (-1.297137) | 2.772805 / 5.269862 (-2.497057) | 1.757287 / 4.565676 (-2.808390) | 0.064209 / 0.424275 (-0.360066) | 0.005688 / 0.007607 (-0.001919) | 0.353175 / 0.226044 (0.127131) | 3.481591 / 2.268929 (1.212662) | 1.995384 / 55.444624 (-53.449240) | 1.684623 / 6.876477 (-5.191854) | 1.675750 / 2.142072 (-0.466323) | 0.644463 / 4.805227 (-4.160764) | 0.115393 / 6.500664 (-6.385271) | 0.040671 / 0.075469 (-0.034799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.037487 / 1.841788 (-0.804301) | 11.902194 / 8.074308 (3.827886) | 10.148579 / 10.191392 (-0.042813) | 0.150261 / 0.680424 (-0.530163) | 0.015001 / 0.534201 (-0.519200) | 0.291008 / 0.579283 (-0.288275) | 0.278758 / 0.434364 (-0.155606) | 0.334037 / 0.540337 (-0.206301) | 0.419942 / 1.386936 (-0.966994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dcd01046388fc052d37acc5a450bea69e3c57afc \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6741/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6741 | https://github.com/huggingface/datasets/pull/6741 | true |
2,231,400,200 | https://api.github.com/repos/huggingface/datasets/issues/6793/labels{/name} | ### Describe the bug
I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits)
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
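A possible workaround sketch (untested, and it assumes the loading script supports streaming) would be to stream the split instead of downloading everything:
```python
# Hedged workaround sketch: streaming may avoid materialising the train/test
# archives on disk when only the validation split is needed.
from datasets import load_dataset

val_stream = load_dataset(
    "imagenet-1k", split="validation", streaming=True, trust_remote_code=True
)
for example in val_stream.take(5):
    print(example["label"])
```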
### Steps to reproduce the bug
1. Install the required libraries (python, datasets, huggingface_hub)
2. Log in using the huggingface CLI
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2 | 2024-04-08T14:39:14Z | 6,793 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T14:39:14Z | https://api.github.com/repos/huggingface/datasets/issues/6793/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6793/timeline | Loading just one particular split is not possible for imagenet-1k | https://api.github.com/repos/huggingface/datasets/issues/6793/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/165930106?v=4",
"events_url": "https://api.github.com/users/PaulPSta/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulPSta/followers",
"following_url": "https://api.github.com/users/PaulPSta/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulPSta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulPSta",
"id": 165930106,
"login": "PaulPSta",
"node_id": "U_kgDOCePkeg",
"organizations_url": "https://api.github.com/users/PaulPSta/orgs",
"received_events_url": "https://api.github.com/users/PaulPSta/received_events",
"repos_url": "https://api.github.com/users/PaulPSta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulPSta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulPSta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulPSta"
} | [] | null | null | NONE | null | null | I_kwDODunzps6FAHcI | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6793/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6793 | https://github.com/huggingface/datasets/issues/6793 | false |
2,231,318,682 | https://api.github.com/repos/huggingface/datasets/issues/6792/labels{/name} | It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config_kwargs like `sample_by=`
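For illustration only (the data file below is a placeholder, not from the PR), this is the kind of call with a config kwarg that must not be answered from the plain call's cache entry:
```python
# Hypothetical reproduction sketch: "corpus.txt" is a placeholder file.
from datasets import load_dataset

plain = load_dataset("text", data_files="corpus.txt")
by_paragraph = load_dataset("text", data_files="corpus.txt", sample_by="paragraph")
# The second call must get its own cache entry instead of reusing the first one.
```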
fix https://github.com/huggingface/datasets/issues/6758 | 2024-04-08T15:55:21Z | 6,792 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-08T14:05:42Z | https://api.github.com/repos/huggingface/datasets/issues/6792/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6792/timeline | Fix cache conflict in `_check_legacy_cache2` | https://api.github.com/repos/huggingface/datasets/issues/6792/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6792",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6792"
} | PR_kwDODunzps5sBEyn | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6792). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6792/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6792 | https://github.com/huggingface/datasets/pull/6792 | true |
2,230,102,332 | https://api.github.com/repos/huggingface/datasets/issues/6791/labels{/name} | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce the bug
1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]`
2. Add an FAISS index on any column `ds.add_faiss_index('title')`
### Expected behavior
The index should be created
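For reference, a hedged sketch (toy column names, requires `faiss-cpu`) of the kind of column `add_faiss_index` can consume — fixed-size float vectors rather than raw strings:
```python
# Hedged sketch: the indexed column must hold fixed-size float32 vectors;
# a string column such as "title" leads to the shape-unpacking error above.
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({
    "title": ["a", "b", "c"],
    "embeddings": [np.random.rand(64).astype("float32") for _ in range(3)],
})
ds.add_faiss_index(column="embeddings")  # builds the index
scores, retrieved = ds.get_nearest_examples(
    "embeddings", np.random.rand(64).astype("float32"), k=1
)
```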
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.9.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
- `faiss-cpu` version: 1.8.0 | 2024-04-09T01:30:55Z | 6,791 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-08T01:57:03Z | https://api.github.com/repos/huggingface/datasets/issues/6791/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6791/timeline | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/huggingface/datasets/issues/6791/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/40491005?v=4",
"events_url": "https://api.github.com/users/NeuralFlux/events{/privacy}",
"followers_url": "https://api.github.com/users/NeuralFlux/followers",
"following_url": "https://api.github.com/users/NeuralFlux/following{/other_user}",
"gists_url": "https://api.github.com/users/NeuralFlux/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeuralFlux",
"id": 40491005,
"login": "NeuralFlux",
"node_id": "MDQ6VXNlcjQwNDkxMDA1",
"organizations_url": "https://api.github.com/users/NeuralFlux/orgs",
"received_events_url": "https://api.github.com/users/NeuralFlux/received_events",
"repos_url": "https://api.github.com/users/NeuralFlux/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeuralFlux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeuralFlux/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeuralFlux"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E7Kk8 | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embeddings as FAISS doesn't perform the embeddings itself.\r\n\r\nI can propose a PR sometime this week.",
"@Dref360 thanks for the initiative!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6791/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6791 | https://github.com/huggingface/datasets/issues/6791 | false |
2,229,915,236 | https://api.github.com/repos/huggingface/datasets/issues/6790/labels{/name} | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggingface/datasets/issues/6176).
In my case, I was trying to load ~70k dataset files from disk using `datasets.load_from_disk(data_path)` (meaning 70k repeated calls to load_from_disk). This triggered an (uninformative) exception around 64k loaded files:
```
File "pyarrow/io.pxi", line 1053, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 1000, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
This happened despite system RAM usage being very low. After a lot of digging around, I discovered that my Ubuntu machine had a limit on the maximum number of memory-mapped files in `/proc/sys/vm/max_map_count`, set to 65530, which was causing my data loader to crash. Increasing the limit in that file (`echo <new_mmap_size> | sudo tee /proc/sys/vm/max_map_count`) made the issue go away.
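For reference, a small hedged sketch (Linux-only, illustrative) of checking this limit from Python before loading many Arrow files:
```python
# Hedged sketch, Linux-only: read the kernel mmap limit that explains the
# ~64k-file failure described above (65530 on the reporter's machine).
def max_map_count() -> int:
    with open("/proc/sys/vm/max_map_count") as f:
        return int(f.read().strip())

print(max_map_count())
```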
While this isn't a bug as such in either Datasets or PyArrow, this behavior can be very confusing to users. Maybe this should be mentioned in the documentation? I suspect the other issues raised here about memory-mapping OOM errors could actually be a consequence of system configuration.
Br,
Lauri
### Steps to reproduce the bug
```
import numpy as np
import pyarrow as pa
import tqdm
# Write some data to disk
arr = pa.array(np.arange(100))
schema = pa.schema([
    pa.field('nums', arr.type)
])
with pa.OSFile('arraydata.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, schema=schema) as writer:
        batch = pa.record_batch([arr], schema=schema)
        writer.write(batch)
# Number of times to open the memory map
nums = 70000
# Read the data back
arrays = [pa.memory_map('arraydata.arrow', 'r') for _ in tqdm.tqdm(range(nums))]
```
### Expected behavior
No errors.
### Environment info
datasets: 2.18.0
pyarrow: 15.0.0 | 2024-04-07T20:00:54Z | 6,790 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T19:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6790/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6790/timeline | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | https://api.github.com/repos/huggingface/datasets/issues/6790/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25725697?v=4",
"events_url": "https://api.github.com/users/lasuomela/events{/privacy}",
"followers_url": "https://api.github.com/users/lasuomela/followers",
"following_url": "https://api.github.com/users/lasuomela/following{/other_user}",
"gists_url": "https://api.github.com/users/lasuomela/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lasuomela",
"id": 25725697,
"login": "lasuomela",
"node_id": "MDQ6VXNlcjI1NzI1Njk3",
"organizations_url": "https://api.github.com/users/lasuomela/orgs",
"received_events_url": "https://api.github.com/users/lasuomela/received_events",
"repos_url": "https://api.github.com/users/lasuomela/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lasuomela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lasuomela/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lasuomela"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E6c5k | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6790/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6790 | https://github.com/huggingface/datasets/issues/6790 | false |
2,229,527,001 | https://api.github.com/repos/huggingface/datasets/issues/6789/labels{/name} | ### Describe the bug
Map has been taking an extremely long time to preprocess my data.
It seems to process 1000 examples (which it does really fast, in about 10 seconds), then it hangs for a good 1-2 minutes before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reason, creating a file named tmp1335llua that is over 300GB.
Trying to set num_proc to >1 also gives me the following error: NameError: name 'processor' is not defined
Please advise on how I could optimise this.
### Steps to reproduce the bug
In general, I have been using map as per normal. Here is a snippet of my code:
````
########################### DATASET LOADING AND PREP #########################
def load_custom_dataset(split):
    ds = []
    if split == 'train':
        for dset in args.train_datasets:
            ds.append(load_from_disk(dset))
    if split == 'test':
        for dset in args.test_datasets:
            ds.append(load_from_disk(dset))
    ds_to_return = concatenate_datasets(ds)
    ds_to_return = ds_to_return.shuffle(seed=22)
    return ds_to_return
def prepare_dataset(batch):
    # load and (possibly) resample audio data to 16kHz
    audio = batch["audio"]
    # compute log-Mel input features from input audio array
    batch["input_features"] = processor.feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    # compute input length of audio sample in seconds
    batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
    # optional pre-processing steps
    transcription = batch["sentence"]
    if do_lower_case:
        transcription = transcription.lower()
    if do_remove_punctuation:
        transcription = normalizer(transcription).strip()
    # encode target text to label ids
    batch["labels"] = processor.tokenizer(transcription).input_ids
    return batch
print('DATASET PREPARATION IN PROGRESS...')
# case 3: combine_and_shuffle is true, only train provided
# load train datasets
train_set = load_custom_dataset('train')
# split dataset
raw_dataset = DatasetDict()
raw_dataset = train_set.train_test_split(test_size = args.test_size, shuffle=True, seed=42)
raw_dataset = raw_dataset.cast_column("audio", Audio(sampling_rate=args.sampling_rate))
print("Before Map:")
print(raw_dataset)
raw_dataset = raw_dataset.map(prepare_dataset, num_proc=1)
print("After Map:")
print(raw_dataset)
````
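A hedged sketch of one way to make `num_proc > 1` work (it assumes `prepare_dataset` is modified to accept the processor explicitly; the parameter values are illustrative):
```python
# Hedged sketch: pass the processor into the mapped function so that worker
# processes can see it, and optionally lower writer_batch_size (default 1000)
# so rows are flushed to the temporary Arrow file in smaller batches.
from functools import partial

def prepare_dataset(batch, processor):
    # same body as above, just taking the processor as an argument
    ...

raw_dataset = raw_dataset.map(
    partial(prepare_dataset, processor=processor),
    num_proc=4,             # illustrative value
    writer_batch_size=100,  # illustrative value
)
```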
### Expected behavior
Based on the speed at which map is processing examples, I would expect all the mapping to complete in 5-6 hours.
However, because it hangs every 1000 examples, I instead roughly estimate it would take about 40 hours!
Moreover, I can't even finish the map because it keeps exponentially eating up my hard drive space.
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.14
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 2024-04-08T09:37:28Z | 6,789 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-07T02:52:06Z | https://api.github.com/repos/huggingface/datasets/issues/6789/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6789/timeline | Issue with map | https://api.github.com/repos/huggingface/datasets/issues/6789/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/102672238?v=4",
"events_url": "https://api.github.com/users/Nsohko/events{/privacy}",
"followers_url": "https://api.github.com/users/Nsohko/followers",
"following_url": "https://api.github.com/users/Nsohko/following{/other_user}",
"gists_url": "https://api.github.com/users/Nsohko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nsohko",
"id": 102672238,
"login": "Nsohko",
"node_id": "U_kgDOBh6nbg",
"organizations_url": "https://api.github.com/users/Nsohko/orgs",
"received_events_url": "https://api.github.com/users/Nsohko/received_events",
"repos_url": "https://api.github.com/users/Nsohko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nsohko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nsohko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nsohko"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E4-HZ | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing you should probably pass the `processor `as an argument (with e.g. partial) to the function or create it inside so that the sub-processes have access to it and maybe add `if __name__ == \"__main__\"` (not sure that's necessary?).\r\n",
"Hi @Modexus,\r\n\r\nThank you very much for the help! Yep after playing around with map, I managed to get the parallel processing to work by implementing it like you suggested.\r\n\r\nRegarding the temp files, it seems like the temp files just keep growing in size as the map continues. Eventually, once map finishes, the temp files are deleted, but they are instead saved as cache .arrow files. These cache files are absolutely gigantic (~ 30-50x the size of the initial dataset!).\r\n\r\nAfter playing around with the `prepare_dataset()` function above, it seems this issue is caused by the following line in the function, where the log-Mel spectrogram of the audio is calculated:\r\n\r\n`# compute log-Mel input features from input audio array\r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], \r\n sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n`\r\n\r\nWhen I remove this line, the final cache files are approximately the same size as the initial dataset.\r\n\r\nCan I check whether this is expected behavior with the whisper feature extractor? I cant imagine the spectrograms are that large!\r\n\r\nThank you so much for the help!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6789/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6789 | https://github.com/huggingface/datasets/issues/6789 | false |
2,229,207,521 | https://api.github.com/repos/huggingface/datasets/issues/6788/labels{/name} | ### Describe the bug
Hello,
I have a question regarding the map function in the Hugging Face datasets.
The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False) and then use the map function to process it, I specify that the returned example should be of type torch.Tensor. However, I noticed that after applying the map function, the datatype automatically changes to List, which leads to errors in my program.
I attempted to use load_dataset(..., streaming=True), and this issue no longer occurs. I'm not entirely clear on why this happens. Could you please provide some insights into this?
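For context, a hedged sketch (illustrative, and it may not be the root cause here): a non-streaming `Dataset` stores mapped values in Arrow, so tensors are read back as lists unless an output format is requested:
```python
# Hedged sketch: map() serialises returned tensors into Arrow storage, so they
# come back as lists unless an output format is set on the Dataset.
dataset = dataset.map(function)
dataset = dataset.with_format("torch")  # __getitem__ now returns torch.Tensor values
```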
### Steps to reproduce the bug
1. dataset = load_dataset(xxx, streaming=False)
2. dataset.map(function), where function returns torch.Tensor.
3. You will find that the format of the data in the dataset is List.
### Expected behavior
I expected the format of the returned data to be torch.Tensor.
### Environment info
2.18.0 | 2024-04-06T11:52:39Z | 6,788 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T11:45:23Z | https://api.github.com/repos/huggingface/datasets/issues/6788/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6788/timeline | A Question About the Map Function | https://api.github.com/repos/huggingface/datasets/issues/6788/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/87431052?v=4",
"events_url": "https://api.github.com/users/ys-lan/events{/privacy}",
"followers_url": "https://api.github.com/users/ys-lan/followers",
"following_url": "https://api.github.com/users/ys-lan/following{/other_user}",
"gists_url": "https://api.github.com/users/ys-lan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ys-lan",
"id": 87431052,
"login": "ys-lan",
"node_id": "MDQ6VXNlcjg3NDMxMDUy",
"organizations_url": "https://api.github.com/users/ys-lan/orgs",
"received_events_url": "https://api.github.com/users/ys-lan/received_events",
"repos_url": "https://api.github.com/users/ys-lan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ys-lan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ys-lan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ys-lan"
} | [] | null | null | NONE | null | null | I_kwDODunzps6E3wHh | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6788/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6788 | https://github.com/huggingface/datasets/issues/6788 | false |
2,229,103,264 | https://api.github.com/repos/huggingface/datasets/issues/6787/labels{/name} | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
    while True:
        continue
    example['a'] = 100
    return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will depend on specific examples (e.g., while most examples take 0.01s in worker, several examples may take 50s).
Therefore, I would like to know how the current implementation will handle those subprocesses that require a long (e.g., >= 5min) or even infinite time.
I notice that the current implementation sets a timeout of 0.05 seconds
https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L674
However, this example code still gets stuck.
### Steps to reproduce the bug
run the example above
### Expected behavior
I want to set a default worker to handle these timeout cases, instead of getting stuck
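A hedged, Unix-only sketch of a workaround (not a `datasets` feature) that guards each call with a timeout and returns a default instead of hanging:
```python
# Hedged sketch (Unix-only, single-process map): SIGALRM interrupts the worker
# after `seconds` and a fallback example is returned instead of hanging forever.
import signal

def with_timeout(fn, seconds=10):
    def _raise(signum, frame):
        raise TimeoutError

    def wrapped(example):
        old = signal.signal(signal.SIGALRM, _raise)
        signal.alarm(seconds)
        try:
            return fn(example)
        except TimeoutError:
            example['a'] = -1  # illustrative default
            return example
        finally:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, old)
    return wrapped

data = data.map(with_timeout(worker, seconds=5))
```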
### Environment info
main branch version | 2024-04-08T14:47:18Z | 6,787 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-06T06:25:39Z | https://api.github.com/repos/huggingface/datasets/issues/6787/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6787/timeline | TimeoutError in map | https://api.github.com/repos/huggingface/datasets/issues/6787/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiaxin-Wen",
"id": 48146603,
"login": "Jiaxin-Wen",
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiaxin-Wen"
} | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps6E3Wqg | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6787/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6787 | https://github.com/huggingface/datasets/issues/6787 | false |
2,228,463,776 | https://api.github.com/repos/huggingface/datasets/issues/6786/labels{/name} | PR for issue #6782.
Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`.
Instead, each `ListArray` item is converted directly to either `Array2DExtensionType` or `Array3DExtensionType`.
This also preserves the `dtype`, removing the warning if the array is already `uint8`.
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6786.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6786",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6786.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6786"
} | PR_kwDODunzps5r3kWg | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6786). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6786/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6786 | https://github.com/huggingface/datasets/pull/6786 | true |
2,228,429,852 | https://api.github.com/repos/huggingface/datasets/issues/6785/labels{/name} | See https://github.com/huggingface/dataset-viewer/issues/2650
Tell me if it's OK, or if it's a breaking change that must be handled differently.
Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it.
And the API URL is still https://datasets-server.huggingface.co/ (and [might always be](https://github.com/huggingface/dataset-viewer/issues/2666)), so I left it as is too. | 2024-04-08T12:41:13Z | 6,785 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T16:37:05Z | https://api.github.com/repos/huggingface/datasets/issues/6785/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6785/timeline | rename datasets-server to dataset-viewer | https://api.github.com/repos/huggingface/datasets/issues/6785/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | null | CONTRIBUTOR | 2024-04-08T12:35:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6785",
"merged_at": "2024-04-08T12:35:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6785"
} | PR_kwDODunzps5r3dCw | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6785). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005224 / 0.011353 (-0.006129) | 0.003938 / 0.011008 (-0.007070) | 0.063829 / 0.038508 (0.025321) | 0.030975 / 0.023109 (0.007865) | 0.265090 / 0.275898 (-0.010808) | 0.290994 / 0.323480 (-0.032486) | 0.003083 / 0.007986 (-0.004902) | 0.002810 / 0.004328 (-0.001518) | 0.048860 / 0.004250 (0.044609) | 0.044663 / 0.037052 (0.007611) | 0.272161 / 0.258489 (0.013672) | 0.306966 / 0.293841 (0.013125) | 0.028028 / 0.128546 (-0.100518) | 0.010616 / 0.075646 (-0.065031) | 0.211649 / 0.419271 (-0.207623) | 0.035906 / 0.043533 (-0.007626) | 0.251779 / 0.255139 (-0.003360) | 0.275543 / 0.283200 (-0.007657) | 0.017710 / 0.141683 (-0.123973) | 1.127015 / 1.452155 (-0.325139) | 1.173319 / 1.492716 (-0.319397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090625 / 0.018006 (0.072619) | 0.301973 / 0.000490 (0.301483) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018868 / 0.037411 (-0.018543) | 0.062402 / 0.014526 (0.047876) | 0.074053 / 0.176557 (-0.102504) | 0.121484 / 0.737135 (-0.615652) | 0.078674 / 0.296338 (-0.217664) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277821 / 0.215209 (0.062612) | 2.761642 / 2.077655 (0.683987) | 1.452735 / 1.504120 (-0.051385) | 1.336303 / 1.541195 (-0.204891) | 1.343045 / 
1.468490 (-0.125445) | 0.560917 / 4.584777 (-4.023860) | 2.353427 / 3.745712 (-1.392286) | 2.699067 / 5.269862 (-2.570795) | 1.704752 / 4.565676 (-2.860925) | 0.062668 / 0.424275 (-0.361607) | 0.005120 / 0.007607 (-0.002487) | 0.330455 / 0.226044 (0.104410) | 3.264604 / 2.268929 (0.995675) | 1.791940 / 55.444624 (-53.652685) | 1.526083 / 6.876477 (-5.350394) | 1.541429 / 2.142072 (-0.600643) | 0.630343 / 4.805227 (-4.174884) | 0.115189 / 6.500664 (-6.385475) | 0.041716 / 0.075469 (-0.033753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975008 / 1.841788 (-0.866779) | 11.326924 / 8.074308 (3.252616) | 9.810300 / 10.191392 (-0.381092) | 0.141068 / 0.680424 (-0.539356) | 0.013950 / 0.534201 (-0.520251) | 0.285691 / 0.579283 (-0.293592) | 0.257968 / 0.434364 (-0.176396) | 0.322976 / 0.540337 (-0.217361) | 0.411114 / 1.386936 (-0.975822) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005176 / 0.011353 (-0.006177) | 0.003631 / 0.011008 (-0.007377) | 0.050006 / 0.038508 (0.011498) | 0.030622 / 0.023109 (0.007513) | 0.277364 / 0.275898 (0.001466) | 0.299752 / 0.323480 (-0.023728) | 0.004110 / 0.007986 (-0.003876) | 0.002694 / 0.004328 (-0.001634) | 0.048966 / 0.004250 (0.044715) | 0.039634 / 0.037052 (0.002582) | 0.289959 / 0.258489 (0.031470) | 0.320689 / 0.293841 (0.026848) | 0.029285 / 0.128546 (-0.099261) | 0.010435 / 0.075646 (-0.065211) | 0.057432 / 0.419271 (-0.361840) | 0.032554 / 0.043533 (-0.010979) | 0.277354 / 0.255139 (0.022215) | 0.296872 / 0.283200 (0.013673) | 0.017338 / 0.141683 (-0.124344) | 1.134174 / 1.452155 (-0.317981) | 1.184695 / 1.492716 (-0.308021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089953 / 0.018006 (0.071947) | 0.299372 / 0.000490 (0.298882) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021349 / 0.037411 (-0.016062) | 0.075167 / 0.014526 (0.060641) | 0.085910 / 0.176557 (-0.090647) | 0.124729 / 0.737135 (-0.612406) | 0.088313 / 0.296338 (-0.208025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291939 / 0.215209 (0.076730) | 2.851077 / 2.077655 (0.773423) | 1.609382 / 1.504120 (0.105262) | 1.469656 / 1.541195 (-0.071539) | 1.490469 / 1.468490 (0.021979) | 0.570421 / 4.584777 (-4.014356) | 2.441438 / 3.745712 (-1.304274) | 2.756514 / 5.269862 (-2.513347) | 1.714202 / 4.565676 (-2.851474) | 0.063656 / 0.424275 (-0.360619) | 0.005640 / 0.007607 (-0.001967) | 0.336240 / 0.226044 (0.110196) | 3.355434 / 2.268929 (1.086505) | 1.947553 / 55.444624 (-53.497072) | 1.672776 / 6.876477 (-5.203700) | 1.685316 / 2.142072 (-0.456757) | 0.638849 / 4.805227 (-4.166378) | 0.116304 / 6.500664 (-6.384360) | 0.041588 / 0.075469 (-0.033881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026700 / 1.841788 (-0.815088) | 12.044628 / 8.074308 (3.970319) | 10.464007 / 10.191392 (0.272615) | 0.156169 / 0.680424 (-0.524255) | 0.015624 / 0.534201 (-0.518577) | 0.287233 / 0.579283 (-0.292050) | 0.270374 / 0.434364 (-0.163990) | 0.325255 / 0.540337 (-0.215083) | 0.412021 / 1.386936 (-0.974915) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6f7f1718e3db54d7923ebe4383301fdd380c18b9 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6785/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6785 | https://github.com/huggingface/datasets/pull/6785 | true |
2,228,390,504 | https://api.github.com/repos/huggingface/datasets/issues/6784/labels{/name} | Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (extracted archives are not deleted by default) and slightly speeds up dataset generation (fewer disk reads) | 2024-04-08T23:33:24Z | 6,784 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-04-05T16:12:25Z | https://api.github.com/repos/huggingface/datasets/issues/6784/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6784/timeline | Extract data on the fly in packaged builders | https://api.github.com/repos/huggingface/datasets/issues/6784/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6784.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6784",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6784.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6784"
} | PR_kwDODunzps5r3UTj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6784). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6784/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6784 | https://github.com/huggingface/datasets/pull/6784 | true |
2,228,179,466 | https://api.github.com/repos/huggingface/datasets/issues/6783/labels{/name} | ### Describe the bug
# problem
I can't resample an audio dataset in a Kaggle Notebook. It looks like some code in the `datasets` library uses aliases that were deprecated in NumPy 1.20.
## code for resampling
```python
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
    )
    return inputs
dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
```
## the error I got
<details>
<summary>Click to expand</summary>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[20], line 1
----> 1 dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
2 dataset
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1955, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1952 disable_tqdm = not logging.is_progress_bar_enabled()
1954 if num_proc is None or num_proc == 1:
-> 1955 return self._map_single(
1956 function=function,
1957 with_indices=with_indices,
1958 with_rank=with_rank,
1959 input_columns=input_columns,
1960 batched=batched,
1961 batch_size=batch_size,
1962 drop_last_batch=drop_last_batch,
1963 remove_columns=remove_columns,
1964 keep_in_memory=keep_in_memory,
1965 load_from_cache_file=load_from_cache_file,
1966 cache_file_name=cache_file_name,
1967 writer_batch_size=writer_batch_size,
1968 features=features,
1969 disable_nullable=disable_nullable,
1970 fn_kwargs=fn_kwargs,
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
1973 desc=desc,
1974 )
1975 else:
1977 def format_cache_file_name(cache_file_name, rank):
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:520, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
523 # Remove task templates if a column mapping of the template is no longer valid
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:487, in transmit_format.<locals>.wrapper(*args, **kwargs)
480 self_format = {
481 "type": self._format_type,
482 "format_kwargs": self._format_kwargs,
483 "columns": self._format_columns,
484 "output_all_columns": self._output_all_columns,
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
File /opt/conda/lib/python3.10/site-packages/datasets/fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:2356, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:507, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:184, in TypedSequence.__arrow_array__(self, type)
182 out = numpy_to_pyarrow_listarray(data)
183 elif isinstance(data, list) and data and isinstance(first_non_null_value(data)[1], np.ndarray):
--> 184 out = list_of_np_array_to_pyarrow_listarray(data)
185 else:
186 trying_cast_to_python_objects = True
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1174, in list_of_np_array_to_pyarrow_listarray(l_arr, type)
1172 """Build a PyArrow ListArray from a possibly nested list of NumPy arrays"""
1173 if len(l_arr) > 0:
-> 1174 return list_of_pa_arrays_to_pyarrow_listarray(
1175 [numpy_to_pyarrow_listarray(arr, type=type) if arr is not None else None for arr in l_arr]
1176 )
1177 else:
1178 return pa.array([], type=type)
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1163, in list_of_pa_arrays_to_pyarrow_listarray(l_arr)
1160 null_indices = [i for i, arr in enumerate(l_arr) if arr is None]
1161 l_arr = [arr for arr in l_arr if arr is not None]
1162 offsets = np.cumsum(
-> 1163 [0] + [len(arr) for arr in l_arr], dtype=np.object
1164 ) # convert to dtype object to allow None insertion
1165 offsets = np.insert(offsets, null_indices, None)
1166 offsets = pa.array(offsets, type=pa.int32())
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```
</details>
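For reference, here is a minimal sketch of the failing pattern (my own illustration, not code taken from the `datasets` source): the deprecated `np.object` alias is gone in newer NumPy releases, while the builtin `object` dtype keeps working.
```python
import numpy as np

# Works on every NumPy version: use the builtin `object` as the dtype.
offsets = np.cumsum([0, 3, 5], dtype=object)

# Raises AttributeError on NumPy releases where the deprecated alias was removed,
# which is the error shown in the traceback above:
# offsets = np.cumsum([0, 3, 5], dtype=np.object)
```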
### Steps to reproduce the bug
Run the above code in a Kaggle Notebook.
### Expected behavior
I can resample audio data without fail.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyArrow version: 11.0.0
- Pandas version: 2.2.1 | 2024-04-08T16:11:01Z | 6,783 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T14:31:48Z | https://api.github.com/repos/huggingface/datasets/issues/6783/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6783/timeline | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | https://api.github.com/repos/huggingface/datasets/issues/6783/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26062262?v=4",
"events_url": "https://api.github.com/users/petrov826/events{/privacy}",
"followers_url": "https://api.github.com/users/petrov826/followers",
"following_url": "https://api.github.com/users/petrov826/following{/other_user}",
"gists_url": "https://api.github.com/users/petrov826/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/petrov826",
"id": 26062262,
"login": "petrov826",
"node_id": "MDQ6VXNlcjI2MDYyMjYy",
"organizations_url": "https://api.github.com/users/petrov826/orgs",
"received_events_url": "https://api.github.com/users/petrov826/received_events",
"repos_url": "https://api.github.com/users/petrov826/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/petrov826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petrov826/subscriptions",
"type": "User",
"url": "https://api.github.com/users/petrov826"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ez1IK | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6783/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6783 | https://github.com/huggingface/datasets/issues/6783 | false |
2,228,081,955 | https://api.github.com/repos/huggingface/datasets/issues/6782/labels{/name} | ### Describe the bug
Operations that save an image from a path into parquet are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow, but used on a multi-dimensional numpy array such as an image it takes a very long time.
From the trace below we can see that `__arrow_array__` takes a long time.
It is currently also called in `get_inferred_type`; this should be removable (#6781) but doesn't change the underlying issue.
The conversion to `pyarrow` and back also leads to the `numpy` array having type `int64`, which causes a warning message because the image type expects `uint8`.
However, originally the `numpy` image array was in `uint8`.
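As a simplified illustration of that dtype round trip (a sketch, not the exact code path `datasets` uses): converting a `uint8` array to `pyarrow`, back to a Python list, and back to `numpy` loses the original dtype.
```python
import numpy as np
import pyarrow as pa

arr = np.zeros((4, 4, 3), dtype=np.uint8)
roundtripped = np.array(pa.array(arr.reshape(-1)).to_pylist())
print(arr.dtype, "->", roundtripped.dtype)  # uint8 -> int64 on most platforms; the original dtype is lost
```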
### Steps to reproduce the bug
```python
from PIL import Image
import numpy as np
import datasets
import cProfile
image = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
image.save("test_image.jpg")
ds = datasets.Dataset.from_dict(
{"image": ["test_image.jpg"]},
features=datasets.Features({"image": datasets.Image(decode=True)}),
)
# load as numpy array, e.g. for further processing with map
# same result as map returning numpy arrays
ds.set_format("numpy")
cProfile.run("ds.map(writer_batch_size=1, load_from_cache_file=False)", "restats")
```
```bash
Fri Apr 5 14:56:17 2024 restats
66817 function calls (64992 primitive calls) in 33.382 seconds
Ordered by: cumulative time
List reduced from 1073 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
46/1 0.000 0.000 33.382 33.382 {built-in method builtins.exec}
1 0.000 0.000 33.382 33.382 <string>:1(<module>)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:594(wrapper)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:551(wrapper)
1 0.000 0.000 33.379 33.379 arrow_dataset.py:2916(map)
4 0.000 0.000 33.327 8.332 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 33.311 33.311 arrow_writer.py:465(write)
2 0.000 0.000 33.311 16.656 arrow_writer.py:423(write_examples_on_file)
1 0.000 0.000 33.311 33.311 arrow_writer.py:527(write_batch)
2 14.484 7.242 33.260 16.630 arrow_writer.py:161(__arrow_array__)
1 0.001 0.001 16.438 16.438 arrow_writer.py:121(get_inferred_type)
1 0.000 0.000 14.398 14.398 threading.py:637(wait)
1 0.000 0.000 14.398 14.398 threading.py:323(wait)
8 14.398 1.800 14.398 1.800 {method 'acquire' of '_thread.lock' objects}
4/2 0.000 0.000 4.337 2.169 table.py:1800(wrapper)
2 0.000 0.000 4.337 2.169 table.py:1950(cast_array_to_feature)
2 0.475 0.238 4.337 2.169 image.py:209(cast_storage)
9 2.583 0.287 2.583 0.287 {built-in method numpy.array}
2 0.000 0.000 1.284 0.642 image.py:319(encode_np_array)
2 0.000 0.000 1.246 0.623 image.py:301(image_to_bytes)
```
### Expected behavior
The `numpy` image data should be passed through as it will be directly consumed by `pillow` to convert it to bytes.
As an example, one can replace `list_of_np_array_to_pyarrow_listarray(data)` in `__arrow_array__` with just `out = data` as a test.
We have to change `cast_storage` of the `Image` feature so it handles the passed-through data (and decide whether to handle the type beforehand):
```python
bytes_array = pa.array(
    [encode_np_array(arr)["bytes"] if arr is not None else None for arr in storage],
    type=pa.binary(),
)
```
Leading to the following:
```bash
Fri Apr 5 15:44:27 2024 restats
66419 function calls (64595 primitive calls) in 0.937 seconds
Ordered by: cumulative time
List reduced from 1023 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
47/1 0.000 0.000 0.935 0.935 {built-in method builtins.exec}
2/1 0.000 0.000 0.935 0.935 <string>:1(<module>)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:594(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:551(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:2916(map)
4 0.000 0.000 0.933 0.233 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 0.883 0.883 arrow_writer.py:466(write)
2 0.000 0.000 0.883 0.441 arrow_writer.py:424(write_examples_on_file)
1 0.000 0.000 0.882 0.882 arrow_writer.py:528(write_batch)
2 0.000 0.000 0.877 0.439 arrow_writer.py:161(__arrow_array__)
4/2 0.000 0.000 0.877 0.439 table.py:1800(wrapper)
2 0.000 0.000 0.877 0.439 table.py:1950(cast_array_to_feature)
2 0.009 0.005 0.877 0.439 image.py:209(cast_storage)
2 0.000 0.000 0.868 0.434 image.py:335(encode_np_array)
2 0.000 0.000 0.856 0.428 image.py:317(image_to_bytes)
2 0.000 0.000 0.822 0.411 Image.py:2376(save)
2 0.000 0.000 0.822 0.411 PngImagePlugin.py:1233(_save)
2 0.000 0.000 0.822 0.411 ImageFile.py:517(_save)
2 0.000 0.000 0.821 0.411 ImageFile.py:545(_encode_tile)
589 0.803 0.001 0.803 0.001 {method 'encode' of 'ImagingEncoder' objects}
```
This is of course only a test, as it passes through all `numpy` arrays irrespective of whether they should be an image.
Also I guess `cast_storage` is meant for casting `pyarrow` storage exclusively.
Converting to a `pyarrow` array seems like a good solution as it also handles `pytorch` tensors etc.; maybe there is a more efficient way to create a PIL image from a `pyarrow` array?
Not sure how this should be handled, but I would be happy to help if there is a good solution.
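One possibly more direct route (a rough sketch, assuming the flat pixel buffer and the target shape are known; not benchmarked) would be to go from the `pyarrow` array to `numpy` without a Python-list round trip and hand that to `pillow`:
```python
import numpy as np
import pyarrow as pa
from PIL import Image

shape = (64, 64, 3)  # hypothetical shape; in practice it would come from the ListArray metadata
flat = pa.array(np.random.randint(0, 255, int(np.prod(shape)), dtype=np.uint8))
pixels = flat.to_numpy(zero_copy_only=True).reshape(shape)  # no .to_pylist() round trip
img = Image.fromarray(pixels)
```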
### Environment info
- `datasets` version: 2.18.1.dev0
- Platform: Linux-6.7.11-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.3.1 | 2024-04-05T21:04:43Z | 6,782 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-05T13:46:54Z | https://api.github.com/repos/huggingface/datasets/issues/6782/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6782/timeline | Map/Saving Image from external filepath extremely slow | https://api.github.com/repos/huggingface/datasets/issues/6782/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EzdUj | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n shape = ()\r\n while isinstance(arr, pa.ListArray):\r\n len_curr = len(arr)\r\n arr = arr.flatten()\r\n len_new = len(arr)\r\n shape = shape + (len_new // len_curr,)\r\n return shape\r\n\r\n def get_dtypes(arr):\r\n dtype = storage.type\r\n while hasattr(dtype, \"value_type\"):\r\n dtype = dtype.value_type\r\n return dtype\r\n\r\n arrays = []\r\n for i, is_null in enumerate(storage.is_null()):\r\n if not is_null.as_py():\r\n storage_part = storage.take([i])\r\n shape = get_shapes(storage_part)\r\n dtype = get_dtypes(storage_part)\r\n\r\n extension_type = Array3DExtensionType(shape=shape, dtype=str(dtype))\r\n array = pa.ExtensionArray.from_storage(extension_type, storage_part)\r\n arrays.append(array.to_numpy().squeeze(0))\r\n else:\r\n arrays.append(None)\r\n\r\n bytes_array = pa.array(\r\n [encode_np_array(arr)[\"bytes\"] if arr is not None else None for arr in arrays],\r\n type=pa.binary(),\r\n )\r\n path_array = pa.array([None] * len(storage), type=pa.string())\r\n storage = pa.StructArray.from_arrays(\r\n [bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null()\r\n )\r\n```\r\n(Edited): to handle nulls\r\n\r\nNotably this doesn't change anything about the passing through of data or other things, just in the `Image` class.\r\nSeems quite fast:\r\n```bash\r\nFri Apr 5 17:55:51 2024 restats\r\n\r\n 63818 function calls (61995 primitive calls) in 0.812 seconds\r\n\r\n Ordered by: cumulative time\r\n List reduced from 1051 to 20 due to restriction <20>\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 47/1 0.000 0.000 0.810 0.810 {built-in method builtins.exec}\r\n 2/1 0.000 0.000 0.810 0.810 <string>:1(<module>)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:594(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:551(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:2916(map)\r\n 3 0.000 0.000 0.807 0.269 arrow_dataset.py:3277(_map_single)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:589(finalize)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:423(write_examples_on_file)\r\n 1 0.000 0.000 0.759 0.759 arrow_writer.py:527(write_batch)\r\n 1 0.001 0.001 0.754 0.754 arrow_writer.py:161(__arrow_array__)\r\n 2/1 0.000 0.000 0.719 0.719 table.py:1800(wrapper)\r\n 1 0.000 0.000 0.719 0.719 table.py:1950(cast_array_to_feature)\r\n 1 0.006 0.006 0.718 0.718 image.py:209(cast_storage)\r\n 1 0.000 0.000 0.451 0.451 image.py:361(encode_np_array)\r\n 1 0.000 0.000 0.444 0.444 image.py:343(image_to_bytes)\r\n 1 0.000 0.000 0.413 0.413 Image.py:2376(save)\r\n 1 0.000 0.000 0.413 0.413 PngImagePlugin.py:1233(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:517(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:545(_encode_tile)\r\n 397 0.409 0.001 0.409 0.001 {method 'encode' of 'ImagingEncoder' objects}\r\n```",
"Also encounter this problem. Has been strugging with it for a long time..."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6782/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6782 | https://github.com/huggingface/datasets/issues/6782 | false |
2,228,026,497 | https://api.github.com/repos/huggingface/datasets/issues/6781/labels{/name} | Inferring the type seems to be unnecessary given that the pyarrow array has already been created.
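Loosely, the idea (an illustrative sketch, not the actual `ArrowWriter` code) is that once a `pyarrow` array has been built, its type can be read off directly instead of building a second array just to infer it:
```python
import pyarrow as pa

arr = pa.array([[1, 2], [3]])  # stand-in for the array the writer has already built
inferred_type = arr.type       # list<item: int64>; no second pa.array() call needed
```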
Because pyarrow array creation is sometimes extremely slow, this doubles the time `write_batch` takes. | 2024-04-09T07:49:11Z | 6,781 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-05T13:21:05Z | https://api.github.com/repos/huggingface/datasets/issues/6781/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6781/timeline | Remove get_inferred_type from ArrowWriter write_batch | https://api.github.com/repos/huggingface/datasets/issues/6781/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [] | null | null | NONE | 2024-04-09T07:49:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6781",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6781"
} | PR_kwDODunzps5r2DMe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6781). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Close in favor of #6786."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6781/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6781 | https://github.com/huggingface/datasets/pull/6781 | true |
2,226,160,096 | https://api.github.com/repos/huggingface/datasets/issues/6780/labels{/name} | Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova).
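For context, pinning looks roughly like this (an illustrative sketch; the revision value actually used in the test is not reproduced here, so treat `"main"` as a placeholder):
```python
from datasets import load_dataset

# "main" is a placeholder; the test pins a specific revision that still ships a loading script.
ds = load_dataset("wmt_t2t", revision="main", trust_remote_code=True)
```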
Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests. | 2024-04-04T18:46:04Z | 6,780 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:45:04Z | https://api.github.com/repos/huggingface/datasets/issues/6780/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6780/timeline | Fix CI | https://api.github.com/repos/huggingface/datasets/issues/6780/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-04T18:23:34Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6780",
"merged_at": "2024-04-04T18:23:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6780"
} | PR_kwDODunzps5rvkyj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6780). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005074 / 0.011353 (-0.006279) | 0.003395 / 0.011008 (-0.007614) | 0.062358 / 0.038508 (0.023849) | 0.031041 / 0.023109 (0.007932) | 0.244039 / 0.275898 (-0.031859) | 0.266361 / 0.323480 (-0.057119) | 0.003201 / 0.007986 (-0.004785) | 0.002609 / 0.004328 (-0.001719) | 0.049269 / 0.004250 (0.045018) | 0.045713 / 0.037052 (0.008661) | 0.264075 / 0.258489 (0.005586) | 0.295428 / 0.293841 (0.001587) | 0.027882 / 0.128546 (-0.100664) | 0.010424 / 0.075646 (-0.065222) | 0.208417 / 0.419271 (-0.210854) | 0.035728 / 0.043533 (-0.007805) | 0.246803 / 0.255139 (-0.008336) | 0.267169 / 0.283200 (-0.016031) | 0.019797 / 0.141683 (-0.121885) | 1.163299 / 1.452155 (-0.288856) | 1.196118 / 1.492716 (-0.296599) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.106091 / 0.018006 (0.088085) | 0.303970 / 0.000490 (0.303480) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017955 / 0.037411 (-0.019456) | 0.060539 / 0.014526 (0.046013) | 0.072884 / 0.176557 (-0.103673) | 0.119205 / 0.737135 (-0.617931) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272676 / 0.215209 (0.057467) | 2.715169 / 2.077655 (0.637514) | 1.419090 / 1.504120 (-0.085030) | 1.303903 / 1.541195 (-0.237292) | 1.311903 / 
1.468490 (-0.156587) | 0.562005 / 4.584777 (-4.022772) | 2.432817 / 3.745712 (-1.312896) | 2.770599 / 5.269862 (-2.499263) | 1.723043 / 4.565676 (-2.842633) | 0.064341 / 0.424275 (-0.359934) | 0.004923 / 0.007607 (-0.002684) | 0.330507 / 0.226044 (0.104463) | 3.240829 / 2.268929 (0.971901) | 1.787638 / 55.444624 (-53.656986) | 1.522971 / 6.876477 (-5.353506) | 1.529496 / 2.142072 (-0.612576) | 0.645768 / 4.805227 (-4.159459) | 0.116405 / 6.500664 (-6.384259) | 0.041524 / 0.075469 (-0.033945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968515 / 1.841788 (-0.873272) | 11.628911 / 8.074308 (3.554603) | 9.495023 / 10.191392 (-0.696369) | 0.142219 / 0.680424 (-0.538204) | 0.013859 / 0.534201 (-0.520342) | 0.285727 / 0.579283 (-0.293556) | 0.276842 / 0.434364 (-0.157522) | 0.321247 / 0.540337 (-0.219090) | 0.409958 / 1.386936 (-0.976978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005102 / 0.011353 (-0.006251) | 0.003213 / 0.011008 (-0.007796) | 0.049250 / 0.038508 (0.010742) | 0.030649 / 0.023109 (0.007540) | 0.276629 / 0.275898 (0.000731) | 0.297315 / 0.323480 (-0.026165) | 0.004198 / 0.007986 (-0.003787) | 0.002744 / 0.004328 (-0.001585) | 0.047899 / 0.004250 (0.043649) | 0.040596 / 0.037052 (0.003544) | 0.287248 / 0.258489 (0.028759) | 0.313573 / 0.293841 (0.019732) | 0.029067 / 0.128546 (-0.099480) | 0.010122 / 0.075646 (-0.065524) | 0.058869 / 0.419271 (-0.360402) | 0.033012 / 0.043533 (-0.010521) | 0.272995 / 0.255139 (0.017856) | 0.297102 / 0.283200 (0.013903) | 0.018209 / 0.141683 (-0.123474) | 1.157785 / 1.452155 (-0.294369) | 1.184999 / 1.492716 (-0.307717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094228 / 0.018006 (0.076221) | 0.302055 / 0.000490 (0.301565) | 0.000221 / 0.000200 (0.000021) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022020 / 0.037411 (-0.015391) | 0.074970 / 0.014526 (0.060444) | 0.087682 / 0.176557 (-0.088875) | 0.126506 / 0.737135 (-0.610629) | 0.092046 / 0.296338 (-0.204293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295634 / 0.215209 (0.080425) | 2.891554 / 2.077655 (0.813899) | 1.579963 / 1.504120 (0.075843) | 1.462924 / 1.541195 (-0.078271) | 1.463806 / 1.468490 (-0.004684) | 0.558371 / 4.584777 (-4.026406) | 2.513500 / 3.745712 (-1.232212) | 2.754146 / 5.269862 (-2.515716) | 1.762317 / 4.565676 (-2.803360) | 0.063965 / 0.424275 (-0.360310) | 0.005538 / 0.007607 (-0.002069) | 0.348114 / 0.226044 (0.122070) | 3.484558 / 2.268929 (1.215630) | 1.940002 / 55.444624 (-53.504623) | 1.658469 / 6.876477 (-5.218008) | 1.645777 / 2.142072 (-0.496295) | 0.639367 / 4.805227 (-4.165861) | 0.115605 / 6.500664 (-6.385059) | 0.040647 / 0.075469 (-0.034822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036002 / 1.841788 (-0.805786) | 12.286895 / 8.074308 (4.212587) | 10.146719 / 10.191392 (-0.044673) | 0.140867 / 0.680424 (-0.539557) | 0.015517 / 0.534201 (-0.518684) | 0.290126 / 0.579283 (-0.289157) | 0.298702 / 0.434364 (-0.135662) | 0.325518 / 0.540337 (-0.214819) | 0.412597 / 1.386936 (-0.974339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3ddb1ef00334a6f973679a51e783905fbc9ef0b \"CML watermark\")\n"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6780/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6780 | https://github.com/huggingface/datasets/pull/6780 | true |
2,226,075,551 | https://api.github.com/repos/huggingface/datasets/issues/6779/labels{/name} | `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here.
It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and 1.5-2x faster in the `windows` one.
Besides introducing `uv` in CI, this PR bumps the `tensorflow` minimal version requirement to align with Transformers and simplifies the SpaCy hashing tests (using blank language models instead of the pre-trained ones).
| 2024-04-08T13:34:01Z | 6,779 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-04T17:02:51Z | https://api.github.com/repos/huggingface/datasets/issues/6779/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6779/timeline | Install dependencies with `uv` in CI | https://api.github.com/repos/huggingface/datasets/issues/6779/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-08T13:27:44Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6779",
"merged_at": "2024-04-08T13:27:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6779"
} | PR_kwDODunzps5rvSA8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6779). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005336 / 0.011353 (-0.006017) | 0.004052 / 0.011008 (-0.006956) | 0.063475 / 0.038508 (0.024967) | 0.032963 / 0.023109 (0.009854) | 0.243906 / 0.275898 (-0.031992) | 0.269048 / 0.323480 (-0.054432) | 0.003363 / 0.007986 (-0.004622) | 0.002802 / 0.004328 (-0.001527) | 0.049487 / 0.004250 (0.045236) | 0.046990 / 0.037052 (0.009938) | 0.260169 / 0.258489 (0.001680) | 0.289145 / 0.293841 (-0.004696) | 0.028030 / 0.128546 (-0.100517) | 0.010706 / 0.075646 (-0.064940) | 0.213640 / 0.419271 (-0.205632) | 0.035866 / 0.043533 (-0.007667) | 0.245106 / 0.255139 (-0.010033) | 0.269588 / 0.283200 (-0.013612) | 0.019791 / 0.141683 (-0.121892) | 1.117684 / 1.452155 (-0.334470) | 1.183389 / 1.492716 (-0.309327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095736 / 0.018006 (0.077730) | 0.302586 / 0.000490 (0.302097) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018985 / 0.037411 (-0.018426) | 0.062097 / 0.014526 (0.047571) | 0.075617 / 0.176557 (-0.100939) | 0.120570 / 0.737135 (-0.616566) | 0.075949 / 0.296338 (-0.220390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279597 / 0.215209 (0.064388) | 2.754319 / 2.077655 (0.676665) | 1.444147 / 1.504120 (-0.059973) | 1.328414 / 1.541195 (-0.212781) | 1.371073 / 
1.468490 (-0.097417) | 0.553851 / 4.584777 (-4.030926) | 2.351694 / 3.745712 (-1.394018) | 2.860771 / 5.269862 (-2.409091) | 1.749664 / 4.565676 (-2.816013) | 0.061736 / 0.424275 (-0.362539) | 0.005073 / 0.007607 (-0.002534) | 0.329974 / 0.226044 (0.103930) | 3.300487 / 2.268929 (1.031558) | 1.812809 / 55.444624 (-53.631815) | 1.559018 / 6.876477 (-5.317458) | 1.628664 / 2.142072 (-0.513408) | 0.635757 / 4.805227 (-4.169471) | 0.116468 / 6.500664 (-6.384196) | 0.042641 / 0.075469 (-0.032828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972048 / 1.841788 (-0.869740) | 11.952721 / 8.074308 (3.878412) | 9.754274 / 10.191392 (-0.437118) | 0.132026 / 0.680424 (-0.548398) | 0.015352 / 0.534201 (-0.518849) | 0.290574 / 0.579283 (-0.288709) | 0.275384 / 0.434364 (-0.158980) | 0.330688 / 0.540337 (-0.209650) | 0.414868 / 1.386936 (-0.972068) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005412 / 0.011353 (-0.005941) | 0.003814 / 0.011008 (-0.007194) | 0.049988 / 0.038508 (0.011480) | 0.031617 / 0.023109 (0.008507) | 0.278975 / 0.275898 (0.003077) | 0.303540 / 0.323480 (-0.019940) | 0.004265 / 0.007986 (-0.003721) | 0.002804 / 0.004328 (-0.001525) | 0.049518 / 0.004250 (0.045268) | 0.041176 / 0.037052 (0.004123) | 0.291248 / 0.258489 (0.032759) | 0.317401 / 0.293841 (0.023560) | 0.029501 / 0.128546 (-0.099045) | 0.010392 / 0.075646 (-0.065255) | 0.057906 / 0.419271 (-0.361365) | 0.033056 / 0.043533 (-0.010477) | 0.280202 / 0.255139 (0.025063) | 0.298684 / 0.283200 (0.015484) | 0.018071 / 0.141683 (-0.123612) | 1.167691 / 1.452155 (-0.284464) | 1.211322 / 1.492716 (-0.281394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092325 / 0.018006 (0.074318) | 0.301209 / 0.000490 (0.300719) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021432 / 0.037411 (-0.015980) | 0.074556 / 0.014526 (0.060031) | 0.086049 / 0.176557 (-0.090508) | 0.125151 / 0.737135 (-0.611984) | 0.088279 / 0.296338 (-0.208059) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296755 / 0.215209 (0.081546) | 2.922650 / 2.077655 (0.844995) | 1.606031 / 1.504120 (0.101911) | 1.489692 / 1.541195 (-0.051502) | 1.530206 / 1.468490 (0.061716) | 0.577827 / 4.584777 (-4.006950) | 2.459716 / 3.745712 (-1.285997) | 2.825192 / 5.269862 (-2.444669) | 1.788110 / 4.565676 (-2.777566) | 0.064011 / 0.424275 (-0.360264) | 0.005616 / 0.007607 (-0.001991) | 0.341612 / 0.226044 (0.115568) | 3.455123 / 2.268929 (1.186194) | 1.961635 / 55.444624 (-53.482990) | 1.688107 / 6.876477 (-5.188370) | 1.725490 / 2.142072 (-0.416583) | 0.656011 / 4.805227 (-4.149216) | 0.117633 / 6.500664 (-6.383031) | 0.041386 / 0.075469 (-0.034083) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025786 / 1.841788 (-0.816002) | 12.294598 / 8.074308 (4.220290) | 10.241136 / 10.191392 (0.049744) | 0.130577 / 0.680424 (-0.549847) | 0.016094 / 0.534201 (-0.518107) | 0.291193 / 0.579283 (-0.288090) | 0.273016 / 0.434364 (-0.161348) | 0.327553 / 0.540337 (-0.212784) | 0.418556 / 1.386936 (-0.968380) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3575036af2fd5cccff7fa60de30e2e444cf8a54e \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6779/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6779 | https://github.com/huggingface/datasets/pull/6779 | true |
2,226,040,636 | https://api.github.com/repos/huggingface/datasets/issues/6778/labels{/name} | ### Describe the bug
The `to_csv()` method does not output commas between list elements, so when the Dataset is loaded back in, the column that contained a list no longer has the correct data structure.
Here's an example (see the steps to reproduce below).
Obviously, it's not as trivial as inserting commas into the list, since it's a comma-separated file. But hopefully there's a way to export the list so that it will be imported by `load_dataset()` correctly.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset

ds = Dataset.from_dict(
    {
        "pokemon": ["bulbasaur", "squirtle"],
        "type": ["grass", "water"]
    }
)

def ascii_to_hex(text):
    return [ord(c) for c in text]

ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
temp.csv then contains the (incorrect) output shown under **ACTUAL OUTPUT** below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
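A possible workaround (a sketch on my side, assuming `ds` is the Dataset built in the snippet above) is to round-trip through pandas and cast the numpy arrays back to plain Python lists before writing, so that the elements are written comma-separated and quoted:
```python
import ast

import pandas as pd
from datasets import Dataset

df = ds.to_pandas()
df["int"] = df["int"].apply(list)  # numpy arrays -> Python lists, so commas get written
df.to_csv("../output/temp.csv", index=False)

# reading back: parse the stringified lists into Python lists again
df2 = pd.read_csv("../output/temp.csv", converters={"int": ast.literal_eval})
ds2 = Dataset.from_pandas(df2)
```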
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | 2024-04-08T15:24:41Z | 6,778 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T16:46:13Z | https://api.github.com/repos/huggingface/datasets/issues/6778/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6778/timeline | Dataset.to_csv() missing commas in columns with lists | https://api.github.com/repos/huggingface/datasets/issues/6778/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/100041276?v=4",
"events_url": "https://api.github.com/users/mpickard-dataprof/events{/privacy}",
"followers_url": "https://api.github.com/users/mpickard-dataprof/followers",
"following_url": "https://api.github.com/users/mpickard-dataprof/following{/other_user}",
"gists_url": "https://api.github.com/users/mpickard-dataprof/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mpickard-dataprof",
"id": 100041276,
"login": "mpickard-dataprof",
"node_id": "U_kgDOBfaCPA",
"organizations_url": "https://api.github.com/users/mpickard-dataprof/orgs",
"received_events_url": "https://api.github.com/users/mpickard-dataprof/received_events",
"repos_url": "https://api.github.com/users/mpickard-dataprof/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mpickard-dataprof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpickard-dataprof/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mpickard-dataprof"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Erq88 | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: list(arr))\r\ndf.to_csv(index=False, '../output/temp.csv')\r\n```\r\n\r\nI think it would be good if `datasets` would do the conversion itself, but it's a breaking change and I would wait for the greenlight from someone from HF."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6778/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6778 | https://github.com/huggingface/datasets/issues/6778 | false |
2,224,611,247 | https://api.github.com/repos/huggingface/datasets/issues/6777/labels{/name} | ### Describe the bug
Hi I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
.
.
.
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset; however, it is not able to load it according to the fields in `metadata1000.jsonl`.
Please assist with loading the data properly.
I am also getting:
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
### Steps to reproduce the bug
`datasets` version: 2.18.0
Create a similar JSONL file and directory structure to the one above.
### Expected behavior
A dataset object should be created with the columns `caption`, `image`, and `gaussian_padded_image`.
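Until `imagefolder` supports more than one image column per row, one possible workaround (a sketch assuming the layout above; the paths are illustrative) is to build the dataset directly from the JSON Lines file with `Dataset.from_generator`:
```python
import json
import os

from datasets import Dataset, Features, Image, Value

root = "/dataset"  # hypothetical root matching the layout above

def gen():
    with open(os.path.join(root, "metadata1000.jsonl")) as f:
        for line in f:
            row = json.loads(line)
            yield {
                "caption": row["caption"],
                "image": os.path.join(root, row["image"]),
                "gaussian_padded_image": os.path.join(root, row["gaussian_padded_image"]),
            }

features = Features(
    {"caption": Value("string"), "image": Image(), "gaussian_padded_image": Image()}
)
dataset = Dataset.from_generator(gen, features=features)
```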
### Environment info
dataset Version: 2.18.0 | 2024-04-05T21:14:48Z | 6,777 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-04T06:31:53Z | https://api.github.com/repos/huggingface/datasets/issues/6777/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6777/timeline | .Jsonl metadata not detected | https://api.github.com/repos/huggingface/datasets/issues/6777/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/81643693?v=4",
"events_url": "https://api.github.com/users/nighting0le01/events{/privacy}",
"followers_url": "https://api.github.com/users/nighting0le01/followers",
"following_url": "https://api.github.com/users/nighting0le01/following{/other_user}",
"gists_url": "https://api.github.com/users/nighting0le01/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nighting0le01",
"id": 81643693,
"login": "nighting0le01",
"node_id": "MDQ6VXNlcjgxNjQzNjkz",
"organizations_url": "https://api.github.com/users/nighting0le01/orgs",
"received_events_url": "https://api.github.com/users/nighting0le01/received_events",
"repos_url": "https://api.github.com/users/nighting0le01/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nighting0le01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nighting0le01/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nighting0le01"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EmN-v | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/81643693/3754980c-6185-4413-88fa-b499bcdd4195\">\r\n\r\ndataset = load_dataset('/dataset',metadata.csv) \r\n\r\n| workspace\r\n|| source code\r\n| dataset\r\n| |-- images\r\n| |-- metadata.csv\r\n| |-- metadata.jsonl\r\n| |-- padded_images\r\n\r\nExample of metadata.jsonl file\r\n{\"caption\": \"a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle\", \"image\": \"images/212734.png\", \"gaussian_padded_image\": \"padded_images/p_212734.png\"}\r\n{\"caption\": \"an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes\", \"image\": \"images/212735.png\", \"gaussian_padded_image\": \"padded_images/p_212735.png\"}\r\n",
"Loading more than one image per row with `imagefolder` is not supported currently. You can subscribe to https://github.com/huggingface/datasets/issues/5760 to see when it will be.\r\n\r\nInstead, you can load the dataset with `Dataset.from_generator`:\r\n```python\r\nimport json\r\nfrom datasets import Dataset, Value, Image, Features\r\n\r\ndef gen():\r\n with open(\"./dataset/metadata.jsonl\") as f:\r\n for line in f:\r\n line = json.loads(line)\r\n yield {\"caption\": line[\"caption\"], \"image\": os.path.join(\"./dataset\", line[\"image\"], \"gaussian_padded_image\": os.path.join(\"./dataset\", line[\"gaussian_padded_image\"]))}\r\n\r\nfeatures = Features({\"caption\": Value(\"string\"), \"image\": Image(), \"gaussian_padded_image\": Image()})\r\ndataset = Dataset.from_generator(gen, features=features)\r\n```\r\n(E.g., if you want to share this dataset on the Hub, you can call `dataset.push_to_hub(...)` afterward)",
"hi Thanks for sharing this, Actually I was trying with a webdataset format of the data as well and it did'nt work. Could you share how i can create Dataset object from webdataset format of this data?"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6777/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6777 | https://github.com/huggingface/datasets/issues/6777 | false |
2,223,457,792 | https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name} | ### Describe the bug
I am trying to fine-tune the llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training completes successfully (the example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).
![image](https://github.com/huggingface/datasets/assets/38481564/47fa2de3-95e0-478b-a35f-58cbaf90427a)
I see the files are being read correctly from the logs:
![image](https://github.com/huggingface/datasets/assets/38481564/b0b6316c-2cc7-476c-9674-ca2222c8f4e3)
### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset to `kk2491/finetune_dataset_002`.
### Expected behavior
The training should complete successfully, and the model should get deployed to an endpoint.
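As a quick sanity check (an addition on my side, not part of the original notebook), printing the size and columns of the raw dataset helps distinguish a loading problem from rows being dropped later during preprocessing:
```python
from datasets import load_dataset

ds = load_dataset("kk2491/finetune_dataset_002", split="train")
# if this prints a non-zero length, the rows are being dropped later in preprocessing
# (e.g. by column removal or sequence packing), not at load time
print(len(ds), ds.column_names)
```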
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
| 2024-04-08T01:24:35Z | 6,775 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T17:06:30Z | https://api.github.com/repos/huggingface/datasets/issues/6775/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6775/timeline | IndexError: Invalid key: 0 is out of bounds for size 0 | https://api.github.com/repos/huggingface/datasets/issues/6775/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://api.github.com/users/kk2491/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kk2491",
"id": 38481564,
"login": "kk2491",
"node_id": "MDQ6VXNlcjM4NDgxNTY0",
"organizations_url": "https://api.github.com/users/kk2491/orgs",
"received_events_url": "https://api.github.com/users/kk2491/received_events",
"repos_url": "https://api.github.com/users/kk2491/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kk2491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kk2491/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kk2491"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Eh0YA | [
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container) ",
"I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess. ",
"> Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in [huggingface/peft#1299](https://github.com/huggingface/peft/issues/1299).\r\n> \r\n> (I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container)\r\n\r\n@mariosasko Thanks for the response and suggestion. \r\nWhen I set `remove_unused_columns` as `False` , I end up getting different error (will post the error soon). \r\nEither the Vertex-AI does not support `remove_unused_columns` or my dataset is completely wrong. \r\n\r\nThank you, \r\nKK",
"> I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n\r\n@cyberyu Thanks for your suggestions. \r\nI have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. \r\nHowever in my case, the issue persists. I am gonna give few more tries, and post the results here. \r\nYou can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main) \r\n\r\nThank you, \r\nKK ",
"> > I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n> \r\n> @cyberyu Thanks for your suggestions. I have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. However in my case, the issue persists. I am gonna give few more tries, and post the results here. You can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main)\r\n> \r\n> Thank you, KK\r\n\r\nI think another reason is your training sample length is too short. I saw a relevant report (https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/16) stating that the processing code might have a bug discarding sequence length short than max_seq_length, which is 512. Not sure the Vertex AI backend code has fixed that bug or not. So I tried to add some garbage content in your data, and extended the length longer than 512 for a single turn, and repeated twice. You can copy the following line as 5 repeated lines as your training data jsonl file of five samples (no eval or test needed, for speed up, set evaluation step to 5 and training step to 10,), and it will pass.\r\n\r\n{\"text\":\"### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment. ### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. 
You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment.\"}\r\n",
"@cyberyu **Thank you so much, You saved my day (+ so many days)**. \r\nI tried the example you provided above, and the training is successfully completed in Vertex-AI (through GUI). \r\nI never thought there would be constraints on the length of the samples and also on the number of turns. \r\nI will update my complete dataset and see update here once the training is completed. \r\n\r\nThank you, \r\nKK "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6775 | https://github.com/huggingface/datasets/issues/6775 | false |
2,222,164,316 | https://api.github.com/repos/huggingface/datasets/issues/6774/labels{/name} | ### Describe the bug
When I create a dataset, it gets stuck while generating the cached data.
This happens when the image format is PNG; it does not get stuck when the image format is JPEG.
![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152)
After debugging, I know that it is caused by the `pa.array` operation in [arrow_writer](https://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_writer.py#L553), but I don't know why.
### Steps to reproduce the bug
```
from datasets import Dataset
from PIL import Image
import json

def generator(lines):
    for line in lines:
        line = json.loads(line)  # assumption: each line is a JSON record with a "url" field
        img = Image.open(open(line["url"], "rb"))
        # print(img.format)  # "PNG"
        yield {
            "image": img,
        }

lines = open(dataset_path, "r")  # dataset_path points to the manifest file
dataset = Dataset.from_generator(
    generator,
    gen_kwargs={"lines": lines}
)
```
### Expected behavior
Generating split done.
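One thing that may be worth trying (an assumption on my part, not verified against the `arrow_writer` internals) is to yield plain file paths together with an explicit `Image()` feature instead of decoded `PIL` images, so that type inference and image re-encoding are skipped while writing. A minimal sketch, where `images.jsonl` is a hypothetical manifest path:
```python
import json

from datasets import Dataset, Features, Image

def generator(lines):
    for line in lines:
        record = json.loads(line)  # assumes one JSON object with a "url" field per line
        # yield the file path; the Image() feature handles reading the bytes
        yield {"image": record["url"]}

dataset_path = "images.jsonl"  # hypothetical manifest path
features = Features({"image": Image()})

with open(dataset_path, "r") as f:
    dataset = Dataset.from_generator(
        generator,
        gen_kwargs={"lines": list(f)},
        features=features,
    )
```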
### Environment info
datasets 2.13.0 | 2024-04-03T07:47:31Z | 6,774 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-03T07:47:31Z | https://api.github.com/repos/huggingface/datasets/issues/6774/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6774/timeline | Generating split is very slow when Image format is PNG | https://api.github.com/repos/huggingface/datasets/issues/6774/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22740819?v=4",
"events_url": "https://api.github.com/users/Tramac/events{/privacy}",
"followers_url": "https://api.github.com/users/Tramac/followers",
"following_url": "https://api.github.com/users/Tramac/following{/other_user}",
"gists_url": "https://api.github.com/users/Tramac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tramac",
"id": 22740819,
"login": "Tramac",
"node_id": "MDQ6VXNlcjIyNzQwODE5",
"organizations_url": "https://api.github.com/users/Tramac/orgs",
"received_events_url": "https://api.github.com/users/Tramac/received_events",
"repos_url": "https://api.github.com/users/Tramac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tramac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tramac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tramac"
} | [] | null | null | NONE | null | null | I_kwDODunzps6Ec4lc | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6774/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6774 | https://github.com/huggingface/datasets/issues/6774 | false |
2,221,049,121 | https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name} | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
**EDIT:** as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, the `map()` calls should also retrieve from their cached output. But the `map()` commands sometimes re-execute.
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
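A quick way to confirm whether the `map()` output is being reused (this check is an addition, not in the original report) is to turn on info-level logging and watch for the "Loading cached processed dataset" message:
```python
import datasets

datasets.logging.set_verbosity_info()

ds = datasets.load_dataset("manestay/borderlines", "territories")
ds = ds.map(lambda row: {"Claimants": row["Claimants"].split(";")})
# a cache hit logs "Loading cached processed dataset at ..." instead of re-running the map
```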
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 2024-04-08T18:43:45Z | 6,773 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T17:23:22Z | https://api.github.com/repos/huggingface/datasets/issues/6773/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6773/timeline | Dataset on Hub re-downloads every time? | https://api.github.com/repos/huggingface/datasets/issues/6773/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "https://api.github.com/users/manestay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manestay",
"id": 9099139,
"login": "manestay",
"node_id": "MDQ6VXNlcjkwOTkxMzk=",
"organizations_url": "https://api.github.com/users/manestay/orgs",
"received_events_url": "https://api.github.com/users/manestay/received_events",
"repos_url": "https://api.github.com/users/manestay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manestay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manestay"
} | [] | null | completed | NONE | 2024-04-08T18:43:45Z | null | I_kwDODunzps6EYoUh | [
"The caching works as expected when I try to reproduce this locally or on Colab...",
"hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the function output for `territories.map(lambda row: {'Claimants': row['Claimants'].split(';')})`. My current run re-ran this, even though I have run this many times before, and as demonstrated by loading from cache, the loaded dataset is the same.\r\n\r\nI wonder if the issue stems from using CSV output. Do you recommend changing to Parquet, and if so, is there an easy way to take the already uploaded data on the Hub and reformat?",
"This issue seems similar to https://github.com/huggingface/datasets/issues/6184 (`dill` serializes objects defined outside the `__main__` module by reference). You should be able to work around this limitation by defining the lambdas outside of `load_borderlines_hf` (as module variables) and then setting their `__module__` attribute's value to `None` to force serializing them by value, e.g., like this: \r\n```python\r\nsplit_Claimants_row = lambda row: {'Claimants': row['Claimants'].split(';')}\r\nsplit_Claimants_row.__module__ = None\r\n```",
"Thank you, I'll give this a try. Your fix makes sense to me, so this issue can be closed for now.\r\n\r\nUnrelated comment -- for \"Downloads last month\" on the hub page, I'm assuming for this project that each downloaded CSV is 1 download? The dataset consists of 51 CSVs, so I'm trying to see why it's incrementing so quickly (1125 2 days ago, 1246 right now).",
"This doc explains how we count \"Downloads last month\": https://huggingface.co/docs/hub/datasets-download-stats"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6773 | https://github.com/huggingface/datasets/issues/6773 | false |
2,220,851,533 | https://api.github.com/repos/huggingface/datasets/issues/6772/labels{/name} | Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls.
Reported in https://github.com/huggingface/datasets/issues/6700 | 2024-04-02T16:28:45Z | 6,772 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-04-02T15:41:28Z | https://api.github.com/repos/huggingface/datasets/issues/6772/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6772/timeline | `remove_columns`/`rename_columns` doc fixes | https://api.github.com/repos/huggingface/datasets/issues/6772/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-04-02T16:17:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6772",
"merged_at": "2024-04-02T16:17:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6772"
} | PR_kwDODunzps5rdKZ2 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6772). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005728 / 0.011353 (-0.005624) | 0.003809 / 0.011008 (-0.007199) | 0.062930 / 0.038508 (0.024422) | 0.032320 / 0.023109 (0.009211) | 0.251072 / 0.275898 (-0.024826) | 0.275397 / 0.323480 (-0.048083) | 0.003314 / 0.007986 (-0.004671) | 0.002869 / 0.004328 (-0.001460) | 0.049070 / 0.004250 (0.044819) | 0.049282 / 0.037052 (0.012229) | 0.263546 / 0.258489 (0.005057) | 0.291471 / 0.293841 (-0.002370) | 0.028462 / 0.128546 (-0.100084) | 0.010528 / 0.075646 (-0.065119) | 0.211249 / 0.419271 (-0.208023) | 0.036840 / 0.043533 (-0.006693) | 0.250038 / 0.255139 (-0.005101) | 0.268883 / 0.283200 (-0.014317) | 0.021417 / 0.141683 (-0.120266) | 1.139754 / 1.452155 (-0.312400) | 1.197319 / 1.492716 (-0.295397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094191 / 0.018006 (0.076185) | 0.302413 / 0.000490 (0.301923) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018490 / 0.037411 (-0.018922) | 0.063361 / 0.014526 (0.048835) | 0.075854 / 0.176557 (-0.100702) | 0.121499 / 0.737135 (-0.615637) | 0.075982 / 0.296338 (-0.220356) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286030 / 0.215209 (0.070821) | 2.778487 / 2.077655 (0.700832) | 1.440963 / 1.504120 (-0.063157) | 1.326217 / 1.541195 (-0.214977) | 1.359228 / 
1.468490 (-0.109262) | 0.566999 / 4.584777 (-4.017778) | 2.453344 / 3.745712 (-1.292368) | 2.841448 / 5.269862 (-2.428413) | 1.825197 / 4.565676 (-2.740479) | 0.062301 / 0.424275 (-0.361974) | 0.004948 / 0.007607 (-0.002659) | 0.334578 / 0.226044 (0.108534) | 3.302327 / 2.268929 (1.033399) | 1.799808 / 55.444624 (-53.644817) | 1.529693 / 6.876477 (-5.346783) | 1.564684 / 2.142072 (-0.577389) | 0.632891 / 4.805227 (-4.172336) | 0.116594 / 6.500664 (-6.384070) | 0.042695 / 0.075469 (-0.032774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999994 / 1.841788 (-0.841794) | 12.767365 / 8.074308 (4.693057) | 10.550439 / 10.191392 (0.359047) | 0.133437 / 0.680424 (-0.546986) | 0.015252 / 0.534201 (-0.518949) | 0.293285 / 0.579283 (-0.285998) | 0.274773 / 0.434364 (-0.159590) | 0.328718 / 0.540337 (-0.211619) | 0.428021 / 1.386936 (-0.958915) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005538 / 0.011353 (-0.005815) | 0.003738 / 0.011008 (-0.007271) | 0.050179 / 0.038508 (0.011671) | 0.032441 / 0.023109 (0.009332) | 0.294721 / 0.275898 (0.018823) | 0.322616 / 0.323480 (-0.000864) | 0.004255 / 0.007986 (-0.003731) | 0.002913 / 0.004328 (-0.001416) | 0.049044 / 0.004250 (0.044794) | 0.042361 / 0.037052 (0.005309) | 0.304162 / 0.258489 (0.045673) | 0.332757 / 0.293841 (0.038916) | 0.029355 / 0.128546 (-0.099191) | 0.010546 / 0.075646 (-0.065100) | 0.058213 / 0.419271 (-0.361058) | 0.032648 / 0.043533 (-0.010885) | 0.298241 / 0.255139 (0.043102) | 0.313710 / 0.283200 (0.030510) | 0.017836 / 0.141683 (-0.123847) | 1.135050 / 1.452155 (-0.317104) | 1.178277 / 1.492716 (-0.314439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094387 / 0.018006 (0.076381) | 0.301955 / 0.000490 (0.301466) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023135 / 0.037411 (-0.014276) | 0.078109 / 0.014526 (0.063583) | 0.087519 / 0.176557 (-0.089037) | 0.127815 / 0.737135 (-0.609320) | 0.090107 / 0.296338 (-0.206231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289149 / 0.215209 (0.073940) | 2.832354 / 2.077655 (0.754699) | 1.574003 / 1.504120 (0.069883) | 1.449190 / 1.541195 (-0.092005) | 1.465798 / 1.468490 (-0.002692) | 0.561953 / 4.584777 (-4.022824) | 2.445788 / 3.745712 (-1.299924) | 2.882453 / 5.269862 (-2.387409) | 1.813267 / 4.565676 (-2.752409) | 0.063163 / 0.424275 (-0.361112) | 0.005785 / 0.007607 (-0.001822) | 0.340125 / 0.226044 (0.114081) | 3.355370 / 2.268929 (1.086442) | 1.924226 / 55.444624 (-53.520398) | 1.643242 / 6.876477 (-5.233234) | 1.650149 / 2.142072 (-0.491924) | 0.654818 / 4.805227 (-4.150409) | 0.114968 / 6.500664 (-6.385696) | 0.042044 / 0.075469 (-0.033425) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024867 / 1.841788 (-0.816921) | 12.656140 / 8.074308 (4.581832) | 10.927014 / 10.191392 (0.735622) | 0.155929 / 0.680424 (-0.524495) | 0.015356 / 0.534201 (-0.518845) | 0.289834 / 0.579283 (-0.289449) | 0.280889 / 0.434364 (-0.153475) | 0.331490 / 0.540337 (-0.208847) | 0.418037 / 1.386936 (-0.968899) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad3467e9b138d1a9b87b661828a71139f4e46ece \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6772/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6772 | https://github.com/huggingface/datasets/pull/6772 | true |
2,220,131,457 | https://api.github.com/repos/huggingface/datasets/issues/6771/labels{/name} | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loading my dataset as below.
```py
from datasets import load_dataset, IterableDatasetDict
dataset = IterableDatasetDict()
dataset["train"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="train", use_auth_token=True, streaming=True)
dataset["test"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="test", use_auth_token=True, streaming=True)
```
And when I try to see the data I have loaded with
```py
list(dataset["train"].take(1))
```
And it gives me this stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 list(dataset["train"].take(1))
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1388, in IterableDataset.__iter__(self)
1385 yield formatter.format_row(pa_table)
1386 return
-> 1388 for key, example in ex_iterable:
1389 if self.features:
1390 # `IterableDataset` automatically fills missing columns with None.
1391 # This is done with `_apply_feature_types_on_example`.
1392 example = _apply_feature_types_on_example(
1393 example, self.features, token_per_repo_id=self._token_per_repo_id
1394 )
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1044, in TakeExamplesIterable.__iter__(self)
1043 def __iter__(self):
-> 1044 yield from islice(self.ex_iterable, self.n)
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:234, in ExamplesIterable.__iter__(self)
233 def __iter__(self):
--> 234 yield from self.generate_examples_fn(**self.kwargs)
File ~/.cache/huggingface/modules/datasets_modules/datasets/RitchieP--VerbaLex_voice/9465eaee58383cf9d7c3e14111d7abaea56398185a641b646897d6df4e4732f7/VerbaLex_voice.py:127, in VerbaLexVoiceDataset._generate_examples(self, local_extracted_archive_paths, archives, meta_path)
125 for i, audio_archive in enumerate(archives):
126 print(audio_archive)
--> 127 for path, file in audio_archive:
128 _, filename = os.path.split(path)
129 if filename in metadata:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:869, in _IterableFromGenerator.__iter__(self)
868 def __iter__(self):
--> 869 yield from self.generator(*self.args, **self.kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:919, in ArchiveIterable._iter_from_urlpath(cls, urlpath, download_config)
915 @classmethod
916 def _iter_from_urlpath(
917 cls, urlpath: str, download_config: Optional[DownloadConfig] = None
918 ) -> Generator[Tuple, None, None]:
--> 919 compression = _get_extraction_protocol(urlpath, download_config=download_config)
920 # Set block_size=0 to get faster streaming
921 # (e.g. for hf:// and https:// it uses streaming Requests file-like instances)
922 with xopen(urlpath, "rb", download_config=download_config, block_size=0) as f:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:400, in _get_extraction_protocol(urlpath, download_config)
398 urlpath, storage_options = _prepare_path_and_storage_options(urlpath, download_config=download_config)
399 try:
--> 400 with fsspec.open(urlpath, **(storage_options or {})) as f:
401 return _get_extraction_protocol_with_magic_number(f)
402 except FileNotFoundError:
File /opt/conda/lib/python3.10/site-packages/fsspec/core.py:100, in OpenFile.__enter__(self)
97 def __enter__(self):
98 mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 100 f = self.fs.open(self.path, mode=mode)
102 self.fobjects = [f]
104 if self.compression is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:1307, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
1305 else:
1306 ac = kwargs.pop("autocommit", not self._intrans)
-> 1307 f = self._open(
1308 path,
1309 mode=mode,
1310 block_size=block_size,
1311 autocommit=ac,
1312 cache_options=cache_options,
1313 **kwargs,
1314 )
1315 if compression is not None:
1316 from fsspec.compression import compr
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:180, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
178 if self.auto_mkdir and "w" in mode:
179 self.makedirs(self._parent(path), exist_ok=True)
--> 180 return LocalFileOpener(path, mode, fs=self, **kwargs)
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:302, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
300 self.compression = get_compression(path, compression)
301 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 302 self._open()
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:307, in LocalFileOpener._open(self)
305 if self.f is None or self.f.closed:
306 if self.autocommit or "w" not in self.mode:
--> 307 self.f = open(self.path, mode=self.mode)
308 if self.compression:
309 compress = compr[self.compression]
FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/h'
```
After looking into the stack trace and referring to the source code, it looks like it's trying to access a directory in the notebook's local environment, and I don't understand why.
Not sure if it's a bug in the Datasets library, so I'm opening a discussion first. Feel free to ask for more information if needed. I appreciate any help in advance!</div>
Hi, referring to the discussion title above: after further digging, I think it's an issue within the `datasets` library, but I'm not quite sure where it is.
If you require any more info or actions from me, please let me know. Appreciate any help in advance! | 2024-04-04T14:22:03Z | 6,771 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-02T10:24:57Z | https://api.github.com/repos/huggingface/datasets/issues/6771/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6771/timeline | Datasets FileNotFoundError when trying to generate examples. | https://api.github.com/repos/huggingface/datasets/issues/6771/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "https://api.github.com/users/RitchieP/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RitchieP",
"id": 26197115,
"login": "RitchieP",
"node_id": "MDQ6VXNlcjI2MTk3MTE1",
"organizations_url": "https://api.github.com/users/RitchieP/orgs",
"received_events_url": "https://api.github.com/users/RitchieP/received_events",
"repos_url": "https://api.github.com/users/RitchieP/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RitchieP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RitchieP/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RitchieP"
} | [] | null | completed | NONE | 2024-04-04T14:22:03Z | null | I_kwDODunzps6EVISB | [
"Hi! I've opened a PR in the repo to fix this issue: https://huggingface.co/datasets/RitchieP/VerbaLex_voice/discussions/6",
"@mariosasko Thanks for the PR and help! Guess I could close the issue for now. Appreciate the help!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6771/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6771 | https://github.com/huggingface/datasets/issues/6771 | false |
2,218,991,883 | https://api.github.com/repos/huggingface/datasets/issues/6770/labels{/name} | ### Describe the bug
`Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly.
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2` are installed.
2. Run the following code:
```
from datasets import load_dataset
dataset = load_dataset("trec")
```
3. Then one will get the following error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/trec@65752bf53af25bc935a0dce92fb5b6c930728450/default/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
4. A similar issue is also found with the following code:
```
dataset = load_dataset("sst", "default")
```
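For completeness (this snippet is an addition, not part of the original report), the exact versions involved can be recorded with the following; since `datasets` resolves `hf://` paths through `huggingface_hub`, its version is worth recording too:
```python
import datasets
import fsspec
import huggingface_hub

print(datasets.__version__, fsspec.__version__, huggingface_hub.__version__)
```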
### Expected behavior
If the dataset is loaded correctly, one will have:
```
>>> print(dataset)
DatasetDict({
train: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 5452
})
test: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 500
})
})
>>>
```
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.1
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | 2024-04-03T13:42:29Z | 6,770 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-04-01T20:17:48Z | https://api.github.com/repos/huggingface/datasets/issues/6770/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6770/timeline | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | https://api.github.com/repos/huggingface/datasets/issues/6770/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/19348888?v=4",
"events_url": "https://api.github.com/users/fshp971/events{/privacy}",
"followers_url": "https://api.github.com/users/fshp971/followers",
"following_url": "https://api.github.com/users/fshp971/following{/other_user}",
"gists_url": "https://api.github.com/users/fshp971/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fshp971",
"id": 19348888,
"login": "fshp971",
"node_id": "MDQ6VXNlcjE5MzQ4ODg4",
"organizations_url": "https://api.github.com/users/fshp971/orgs",
"received_events_url": "https://api.github.com/users/fshp971/received_events",
"repos_url": "https://api.github.com/users/fshp971/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fshp971/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fshp971/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fshp971"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EQyEL | [
"You should be able to fix this by updating `huggingface_hub` with `pip install -U huggingface_hub`. We use this package under the hood to resolve the Hub's files."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6770/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6770 | https://github.com/huggingface/datasets/issues/6770 | false |
2,218,242,015 | https://api.github.com/repos/huggingface/datasets/issues/6769/labels{/name} | ### Feature request
Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of its columns contains custom (non-serializable) Python objects. For example, here is minimal code:
```
import datasets

class MyClass:
    pass
dataset = datasets.Dataset.from_list([
dict(a=MyClass(), b='hello'),
])
```
It gives error:
```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```
I guess it is because Dataset forces everything to be converted into Arrow format. However, is there any way to make this scenario work? Thanks!
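One possible workaround (just a sketch, not a documented feature; the pickling and the lazy transform are assumptions on my side) is to store the objects as pickled bytes and rebuild them on access:
```
import pickle

import datasets

class MyClass:
    pass

# Arrow only sees a plain binary column, so serialization succeeds.
dataset = datasets.Dataset.from_list([
    {"a": pickle.dumps(MyClass()), "b": "hello"},
])

# Rebuild the Python objects lazily whenever rows are accessed.
dataset.set_transform(
    lambda batch: {"a": [pickle.loads(x) for x in batch["a"]], "b": batch["b"]}
)
print(type(dataset[0]["a"]))  # <class '__main__.MyClass'>
```
The obvious caveat is that the column is only as portable as the pickled class definition.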
### Motivation
(see above)
### Your contribution
Yes, I am happy to PR!
Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy
EDIT: possibly related https://github.com/huggingface/datasets/issues/5766 | 2024-04-01T13:36:58Z | 6,769 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-04-01T13:18:47Z | https://api.github.com/repos/huggingface/datasets/issues/6769/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6769/timeline | (Willing to PR) Datasets with custom python objects | https://api.github.com/repos/huggingface/datasets/issues/6769/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fzyzcjy",
"id": 5236035,
"login": "fzyzcjy",
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fzyzcjy"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EN6_f | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6769/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6769 | https://github.com/huggingface/datasets/issues/6769 | false |
2,217,065,412 | https://api.github.com/repos/huggingface/datasets/issues/6767/labels{/name} | Fixed the issue #6755 on the typo mistake | 2024-04-02T14:14:02Z | 6,767 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-31T16:13:37Z | https://api.github.com/repos/huggingface/datasets/issues/6767/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6767/timeline | fixing the issue 6755(small typo) | https://api.github.com/repos/huggingface/datasets/issues/6767/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
} | [] | null | null | CONTRIBUTOR | 2024-04-02T14:01:18Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6767",
"merged_at": "2024-04-02T14:01:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6767"
} | PR_kwDODunzps5rQO9J | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6767). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005526 / 0.011353 (-0.005827) | 0.003839 / 0.011008 (-0.007169) | 0.064027 / 0.038508 (0.025519) | 0.032316 / 0.023109 (0.009206) | 0.250707 / 0.275898 (-0.025191) | 0.269222 / 0.323480 (-0.054258) | 0.004335 / 0.007986 (-0.003651) | 0.002703 / 0.004328 (-0.001626) | 0.049621 / 0.004250 (0.045370) | 0.047499 / 0.037052 (0.010446) | 0.262362 / 0.258489 (0.003873) | 0.292765 / 0.293841 (-0.001076) | 0.028661 / 0.128546 (-0.099885) | 0.010835 / 0.075646 (-0.064811) | 0.208910 / 0.419271 (-0.210362) | 0.036624 / 0.043533 (-0.006909) | 0.247448 / 0.255139 (-0.007691) | 0.270593 / 0.283200 (-0.012607) | 0.018988 / 0.141683 (-0.122695) | 1.141224 / 1.452155 (-0.310931) | 1.204944 / 1.492716 (-0.287772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096324 / 0.018006 (0.078318) | 0.292495 / 0.000490 (0.292006) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018379 / 0.037411 (-0.019032) | 0.065216 / 0.014526 (0.050690) | 0.074071 / 0.176557 (-0.102486) | 0.120793 / 0.737135 (-0.616343) | 0.075882 / 0.296338 (-0.220456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286354 / 0.215209 (0.071145) | 2.800766 / 2.077655 (0.723111) | 1.474126 / 1.504120 (-0.029994) | 1.358232 / 1.541195 (-0.182963) | 1.400639 / 
1.468490 (-0.067851) | 0.578354 / 4.584777 (-4.006423) | 2.454441 / 3.745712 (-1.291271) | 2.927003 / 5.269862 (-2.342859) | 1.826127 / 4.565676 (-2.739550) | 0.063049 / 0.424275 (-0.361226) | 0.005010 / 0.007607 (-0.002597) | 0.342174 / 0.226044 (0.116129) | 3.415900 / 2.268929 (1.146971) | 1.854096 / 55.444624 (-53.590528) | 1.568626 / 6.876477 (-5.307851) | 1.660138 / 2.142072 (-0.481934) | 0.664059 / 4.805227 (-4.141168) | 0.120496 / 6.500664 (-6.380168) | 0.044664 / 0.075469 (-0.030805) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988434 / 1.841788 (-0.853353) | 12.525563 / 8.074308 (4.451255) | 10.016862 / 10.191392 (-0.174530) | 0.134043 / 0.680424 (-0.546381) | 0.014349 / 0.534201 (-0.519852) | 0.287173 / 0.579283 (-0.292110) | 0.266499 / 0.434364 (-0.167865) | 0.325425 / 0.540337 (-0.214912) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005675 / 0.011353 (-0.005678) | 0.004238 / 0.011008 (-0.006770) | 0.051048 / 0.038508 (0.012540) | 0.033428 / 0.023109 (0.010319) | 0.283406 / 0.275898 (0.007508) | 0.309321 / 0.323480 (-0.014159) | 0.004354 / 0.007986 (-0.003631) | 0.003101 / 0.004328 (-0.001228) | 0.049369 / 0.004250 (0.045119) | 0.043252 / 0.037052 (0.006200) | 0.293097 / 0.258489 (0.034608) | 0.324392 / 0.293841 (0.030551) | 0.030524 / 0.128546 (-0.098022) | 0.010977 / 0.075646 (-0.064669) | 0.058546 / 0.419271 (-0.360726) | 0.033295 / 0.043533 (-0.010238) | 0.284929 / 0.255139 (0.029790) | 0.302925 / 0.283200 (0.019726) | 0.018586 / 0.141683 (-0.123097) | 1.156552 / 1.452155 (-0.295602) | 1.208856 / 1.492716 (-0.283860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096938 / 0.018006 (0.078932) | 0.305375 / 0.000490 (0.304886) | 0.000227 / 0.000200 (0.000027) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022658 / 0.037411 (-0.014754) | 0.078125 / 0.014526 (0.063599) | 0.087892 / 0.176557 (-0.088665) | 0.127745 / 0.737135 (-0.609390) | 0.089806 / 0.296338 (-0.206533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292434 / 0.215209 (0.077225) | 2.862329 / 2.077655 (0.784674) | 1.607948 / 1.504120 (0.103828) | 1.487179 / 1.541195 (-0.054016) | 1.542234 / 1.468490 (0.073744) | 0.579446 / 4.584777 (-4.005331) | 2.478549 / 3.745712 (-1.267163) | 2.923493 / 5.269862 (-2.346369) | 1.833161 / 4.565676 (-2.732515) | 0.064289 / 0.424275 (-0.359986) | 0.005638 / 0.007607 (-0.001969) | 0.350111 / 0.226044 (0.124067) | 3.436035 / 2.268929 (1.167107) | 1.970592 / 55.444624 (-53.474032) | 1.717474 / 6.876477 (-5.159002) | 1.753150 / 2.142072 (-0.388922) | 0.660495 / 4.805227 (-4.144732) | 0.119302 / 6.500664 (-6.381362) | 0.042633 / 0.075469 (-0.032836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.018761 / 1.841788 (-0.823027) | 12.859834 / 8.074308 (4.785525) | 10.547789 / 10.191392 (0.356397) | 0.131986 / 0.680424 (-0.548438) | 0.016469 / 0.534201 (-0.517732) | 0.288585 / 0.579283 (-0.290698) | 0.270499 / 0.434364 (-0.163865) | 0.325801 / 0.540337 (-0.214537) | 0.416551 / 1.386936 (-0.970385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7599f15537b094bfd18de5af7bb2a482c06d7a0e \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6767/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6767 | https://github.com/huggingface/datasets/pull/6767 | true |
2,215,933,515 | https://api.github.com/repos/huggingface/datasets/issues/6765/labels{/name} | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you have fsspec 2024.3.1 which is incompatible.
Successfully installed aiobotocore-2.12.1 aioitertools-0.11.0 botocore-1.34.51 fsspec-2024.3.1 jmespath-1.0.1 s3fs-2024.3.1 urllib3-2.0.7 wrapt-1.16.0
```
When I install with pip, pip allows this error to exist while still installing s3fs, but this error breaks poetry, since poetry will refuse to install s3fs because of the dependency conflict.
Maybe I'm missing something and it's not a bug but some mistake on my end? Any input would be helpful. Thanks!
### Steps to reproduce the bug
1. conda create -n tmp python=3.10 -y
2. conda activate tmp
3. pip install datasets
4. pip install s3fs
### Expected behavior
I would expect there to be no error.
### Environment info
MacOS (ARM), Python3.10, conda 23.11.0. | 2024-04-03T14:33:12Z | 6,765 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-29T19:57:24Z | https://api.github.com/repos/huggingface/datasets/issues/6765/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6765/timeline | Compatibility issue between s3fs, fsspec, and datasets | https://api.github.com/repos/huggingface/datasets/issues/6765/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/njbrake",
"id": 33383515,
"login": "njbrake",
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"repos_url": "https://api.github.com/users/njbrake/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"type": "User",
"url": "https://api.github.com/users/njbrake"
} | [] | null | completed | NONE | 2024-04-03T14:33:12Z | null | I_kwDODunzps6EFHZL | [
"Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.",
"> Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.\r\n\r\nThanks so much! My inexperience with pip is showing 😆 🙈 "
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6765/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6765 | https://github.com/huggingface/datasets/issues/6765 | false |
2,215,767,119 | https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name} | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metadata.csv
while this dataset can't:
├── example_dataset_symlink/
│ ├── data/
│ │ ├── train/
│ │ │ ├── sym0 -> file0
│ │ │ ├── sym1 -> file1
│ │ ├── dev/
│ │ │ ├── sym2 -> file2
│ │ │ ├── sym3 -> file3
│ ├── metadata.csv
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two examples in train and two in dev. The script won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer to create symbolic links to the data. This way, the disk usage would not increase significantly beyond the initial dataset size.
Advantages of this approach:
│   - It would leave a smaller storage footprint on the hard drive
- Creating smaller datasets would be much faster
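Concretely, the change I have in mind is tiny; a sketch of the idea (the helper name and where it would hook in are illustrative, not actual `datasets` internals):
```
import os

def dereference_symlinks(file_paths):
    # Resolve each link to its target so that sym0 -> file0 is read exactly like
    # file0, while keeping the split assignment (train/dev) of the link itself.
    return [os.path.realpath(path) for path in file_paths]
```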
### Your contribution
I would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input. | 2024-03-29T17:52:27Z | 6,764 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-29T17:49:28Z | https://api.github.com/repos/huggingface/datasets/issues/6764/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6764/timeline | load_dataset can't work with symbolic links | https://api.github.com/repos/huggingface/datasets/issues/6764/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirVincan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VladimirVincan",
"id": 13640533,
"login": "VladimirVincan",
"node_id": "MDQ6VXNlcjEzNjQwNTMz",
"organizations_url": "https://api.github.com/users/VladimirVincan/orgs",
"received_events_url": "https://api.github.com/users/VladimirVincan/received_events",
"repos_url": "https://api.github.com/users/VladimirVincan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VladimirVincan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirVincan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VladimirVincan"
} | [] | null | null | NONE | null | null | I_kwDODunzps6EEexP | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6764 | https://github.com/huggingface/datasets/issues/6764 | false |
2,213,440,804 | https://api.github.com/repos/huggingface/datasets/issues/6763/labels{/name} | When a dataset with uppercase letters in its name is first loaded using `load_dataset()`, the local cache directory is created with all lowercase letters.
However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This discrepancy can lead to confusion and, particularly in offline mode, results in errors.
### Reproduce
```bash
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
>>> quit()
~$ export HF_DATASETS_OFFLINE=1
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'locuslab/TOFU': Offline mode is enabled.
>>>
```
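The fix below boils down to normalizing the case once when the cache directory is generated; an illustrative sketch (not the exact patch, and the directory naming is my simplification):
```
import os

def resolve_cache_dir(base_cache_dir: str, dataset_name: str) -> str:
    # Lowercase once so that "locuslab/TOFU" and "locuslab/tofu" map to the same
    # on-disk directory, which is what offline mode needs to find again.
    normalized = dataset_name.lower().replace("/", "___")
    return os.path.join(base_cache_dir, normalized)
```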
I fix this issue by lowering the dataset name (`.lower()`) when generating cache_dir. | 2024-03-28T15:51:46Z | 6,763 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T14:52:35Z | https://api.github.com/repos/huggingface/datasets/issues/6763/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6763/timeline | Fix issue with case sensitivity when loading dataset from local cache | https://api.github.com/repos/huggingface/datasets/issues/6763/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/58537872?v=4",
"events_url": "https://api.github.com/users/Sumsky21/events{/privacy}",
"followers_url": "https://api.github.com/users/Sumsky21/followers",
"following_url": "https://api.github.com/users/Sumsky21/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumsky21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sumsky21",
"id": 58537872,
"login": "Sumsky21",
"node_id": "MDQ6VXNlcjU4NTM3ODcy",
"organizations_url": "https://api.github.com/users/Sumsky21/orgs",
"received_events_url": "https://api.github.com/users/Sumsky21/received_events",
"repos_url": "https://api.github.com/users/Sumsky21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sumsky21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumsky21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sumsky21"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6763",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6763"
} | PR_kwDODunzps5rENat | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6763/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6763 | https://github.com/huggingface/datasets/pull/6763 | true |
2,213,275,468 | https://api.github.com/repos/huggingface/datasets/issues/6762/labels{/name} | I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable. | 2024-03-29T15:44:02Z | 6,762 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T13:40:28Z | https://api.github.com/repos/huggingface/datasets/issues/6762/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6762/timeline | Allow polars as valid output type | https://api.github.com/repos/huggingface/datasets/issues/6762/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6762.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6762",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6762.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6762"
} | PR_kwDODunzps5rDpBe | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6762/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6762 | https://github.com/huggingface/datasets/pull/6762 | true |
2,212,805,108 | https://api.github.com/repos/huggingface/datasets/issues/6761/labels{/name} | What does this PR do?
1. remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part.
2. `preupload_lfs_files` also had a different behavior between `<0.20` and `>=0.20`. I removed it since huggingface_hub is now pinned to `>=0.21.2`
3. `hf_hub_url` is overwritten to default to the dataset repo_type. I do think it is misleading to keep the same method naming for it. I renamed it to `get_dataset_url` for clarity. Let me know if you prefer to see this change reverted. | 2024-03-29T13:27:26Z | 6,761 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-28T09:57:57Z | https://api.github.com/repos/huggingface/datasets/issues/6761/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6761/timeline | Remove deprecated code | https://api.github.com/repos/huggingface/datasets/issues/6761/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [] | null | null | CONTRIBUTOR | 2024-03-29T13:18:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6761",
"merged_at": "2024-03-29T13:18:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6761"
} | PR_kwDODunzps5rCAu8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6761). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for cleaning this :) I'm also fine with renaming `hf_dataset_url` (and not `get_dataset_url` as you said in your OP)",
"(Yep, `hf_dataset_url` is fine, made a mistake writing the PR description)",
"@albertvillanova Sorry about that, tests are now fixed! :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005995) | 0.003788 / 0.011008 (-0.007220) | 0.063630 / 0.038508 (0.025122) | 0.031353 / 0.023109 (0.008244) | 0.247525 / 0.275898 (-0.028373) | 0.282052 / 0.323480 (-0.041428) | 0.004247 / 0.007986 (-0.003739) | 0.002750 / 0.004328 (-0.001579) | 0.049467 / 0.004250 (0.045217) | 0.046663 / 0.037052 (0.009610) | 0.266440 / 0.258489 (0.007951) | 0.295230 / 0.293841 (0.001389) | 0.028271 / 0.128546 (-0.100276) | 0.011116 / 0.075646 (-0.064530) | 0.222092 / 0.419271 (-0.197179) | 0.036627 / 0.043533 (-0.006906) | 0.252607 / 0.255139 (-0.002532) | 0.271231 / 0.283200 (-0.011969) | 0.019070 / 0.141683 (-0.122613) | 1.152645 / 1.452155 (-0.299509) | 1.211267 / 1.492716 (-0.281449) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095002 / 0.018006 (0.076996) | 0.304054 / 0.000490 (0.303564) | 0.000212 / 0.000200 (0.000012) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018251 / 0.037411 (-0.019161) | 0.061929 / 0.014526 (0.047403) | 0.074641 / 0.176557 (-0.101916) | 0.122643 / 0.737135 (-0.614492) | 0.076744 / 0.296338 (-0.219594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284605 / 0.215209 (0.069396) | 2.774638 / 2.077655 (0.696984) | 1.473907 / 1.504120 (-0.030213) | 1.351054 / 1.541195 (-0.190141) | 1.348840 / 
1.468490 (-0.119650) | 0.576243 / 4.584777 (-4.008534) | 2.444110 / 3.745712 (-1.301602) | 2.814741 / 5.269862 (-2.455121) | 1.762666 / 4.565676 (-2.803010) | 0.063959 / 0.424275 (-0.360316) | 0.005011 / 0.007607 (-0.002596) | 0.338406 / 0.226044 (0.112361) | 3.361213 / 2.268929 (1.092284) | 1.832674 / 55.444624 (-53.611950) | 1.564229 / 6.876477 (-5.312248) | 1.570843 / 2.142072 (-0.571230) | 0.657134 / 4.805227 (-4.148093) | 0.120041 / 6.500664 (-6.380623) | 0.048594 / 0.075469 (-0.026875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965328 / 1.841788 (-0.876460) | 11.704441 / 8.074308 (3.630133) | 9.895462 / 10.191392 (-0.295930) | 0.131913 / 0.680424 (-0.548511) | 0.015175 / 0.534201 (-0.519026) | 0.292022 / 0.579283 (-0.287261) | 0.269752 / 0.434364 (-0.164612) | 0.330453 / 0.540337 (-0.209884) | 0.421659 / 1.386936 (-0.965277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003809 / 0.011008 (-0.007199) | 0.049594 / 0.038508 (0.011086) | 0.031858 / 0.023109 (0.008748) | 0.277622 / 0.275898 (0.001724) | 0.296092 / 0.323480 (-0.027388) | 0.004209 / 0.007986 (-0.003777) | 0.002726 / 0.004328 (-0.001603) | 0.048057 / 0.004250 (0.043806) | 0.043317 / 0.037052 (0.006265) | 0.288371 / 0.258489 (0.029882) | 0.312847 / 0.293841 (0.019007) | 0.029110 / 0.128546 (-0.099437) | 0.010792 / 0.075646 (-0.064854) | 0.058694 / 0.419271 (-0.360577) | 0.033315 / 0.043533 (-0.010218) | 0.281225 / 0.255139 (0.026086) | 0.297044 / 0.283200 (0.013844) | 0.018897 / 0.141683 (-0.122786) | 1.156417 / 1.452155 (-0.295738) | 1.221393 / 1.492716 (-0.271323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.304107 / 0.000490 (0.303618) | 0.000213 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021658 / 0.037411 (-0.015753) | 0.075948 / 0.014526 (0.061423) | 0.087019 / 0.176557 (-0.089537) | 0.127309 / 0.737135 (-0.609827) | 0.092251 / 0.296338 (-0.204087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291906 / 0.215209 (0.076697) | 2.865007 / 2.077655 (0.787352) | 1.591647 / 1.504120 (0.087527) | 1.474499 / 1.541195 (-0.066696) | 1.496644 / 1.468490 (0.028154) | 0.575337 / 4.584777 (-4.009440) | 2.569426 / 3.745712 (-1.176287) | 2.872611 / 5.269862 (-2.397251) | 1.804278 / 4.565676 (-2.761399) | 0.064225 / 0.424275 (-0.360050) | 0.005574 / 0.007607 (-0.002033) | 0.347724 / 0.226044 (0.121680) | 3.426418 / 2.268929 (1.157490) | 1.966270 / 55.444624 (-53.478355) | 1.687790 / 6.876477 (-5.188686) | 1.728530 / 2.142072 (-0.413542) | 0.650251 / 4.805227 (-4.154977) | 0.118381 / 6.500664 (-6.382283) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014203 / 1.841788 (-0.827585) | 12.219496 / 8.074308 (4.145188) | 10.469677 / 10.191392 (0.278285) | 0.141840 / 0.680424 (-0.538584) | 0.015104 / 0.534201 (-0.519097) | 0.288453 / 0.579283 (-0.290830) | 0.287467 / 0.434364 (-0.146897) | 0.331046 / 0.540337 (-0.209292) | 0.423731 / 1.386936 (-0.963205) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#66d6242626eada79cfba4df39d99cd2bacb1cbea \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6761/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6761 | https://github.com/huggingface/datasets/pull/6761 | true |
2,212,288,122 | https://api.github.com/repos/huggingface/datasets/issues/6760/labels{/name} | ### Describe the bug
This happens with datasets-2.18.0; I downgraded the version to 2.14.6, which fixes this temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
### Steps to reproduce the bug
1. Using Python3.10/3.11
2. Install datasets-2.18.0
3. test with
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should manage to download and load the dataset without such an error.
### Environment info
Ubuntu, Python3.10/3.11 | 2024-04-07T09:40:40Z | 6,760 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-28T03:44:26Z | https://api.github.com/repos/huggingface/datasets/issues/6760/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6760/timeline | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | https://api.github.com/repos/huggingface/datasets/issues/6760/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17897916?v=4",
"events_url": "https://api.github.com/users/yucc-leon/events{/privacy}",
"followers_url": "https://api.github.com/users/yucc-leon/followers",
"following_url": "https://api.github.com/users/yucc-leon/following{/other_user}",
"gists_url": "https://api.github.com/users/yucc-leon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yucc-leon",
"id": 17897916,
"login": "yucc-leon",
"node_id": "MDQ6VXNlcjE3ODk3OTE2",
"organizations_url": "https://api.github.com/users/yucc-leon/orgs",
"received_events_url": "https://api.github.com/users/yucc-leon/received_events",
"repos_url": "https://api.github.com/users/yucc-leon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yucc-leon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yucc-leon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yucc-leon"
} | [] | null | null | NONE | null | null | I_kwDODunzps6D3NZ6 | [
"The same error with mteb datasets.",
"Unfortunately, I'm unable to reproduce this error locally or on Colab.",
"Here is the requirements.txt from a clean virtual environment (managed by conda) where I only install `datasets` by \r\n`pip install datasets`. \r\nThe pip list:\r\n```\r\naiohttp==3.9.3\r\naiosignal==1.3.1\r\nattrs==23.2.0\r\ncertifi==2024.2.2\r\ncharset-normalizer==3.3.2\r\ndatasets==2.18.0\r\ndill==0.3.8\r\nfilelock==3.13.3\r\nfrozenlist==1.4.1\r\nfsspec==2024.2.0\r\nhuggingface-hub==0.22.2\r\nidna==3.6\r\nmultidict==6.0.5\r\nmultiprocess==0.70.16\r\nnumpy==1.26.4\r\npackaging==24.0\r\npandas==2.2.1\r\npyarrow==15.0.2\r\npyarrow-hotfix==0.6\r\npython-dateutil==2.9.0.post0\r\npytz==2024.1\r\nPyYAML==6.0.1\r\nrequests==2.31.0\r\nsix==1.16.0\r\ntqdm==4.66.2\r\ntyping_extensions==4.11.0\r\ntzdata==2024.1\r\nurllib3==2.2.1\r\nxxhash==3.4.1\r\nyarl==1.9.4\r\n```\r\nAnd the error can be reproduced.\r\n\r\nDowngrading to datasets==2.14.6 changes some packages' versions:\r\n\r\n```\r\nSuccessfully installed datasets-2.14.6 dill-0.3.7 fsspec-2023.10.0 multiprocess-0.70.15\r\n```\r\nand the dataset can be downloaded and loaded. \r\n\r\nThen I upgrade the version to 2.18.0 again; now the dataset can be loaded with such a line:\r\n```Using the latest cached version of the module from /home/xxx/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--apps/04ac807715d07d6e5cc580f59cdc8213cd7dc4529d0bb819cca72c9f8e8c1aa5 (last modified on Sun Apr 7 09:06:43 2024) since it couldn't be found locally at codeparrot/apps, or remotely on the Hugging Face Hub. ```\r\n\r\nSo the latest version works wrong when requesting the dataset info. \r\n\r\n**But if you cannot reproduce this, I may ignore some detailed information: I use `HF_ENDPOINT=https://hf-mirror.com` for some reason (if not use this I cannot connect to huggingface resources) and the error occurs when requesting the dataset's info card.** \r\nMaybe the error is caused by this environment variable.\r\nI'll open an issue in the author's repo now."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6760/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6760 | https://github.com/huggingface/datasets/issues/6760 | false |
2,208,892,891 | https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name} | ### Feature request
Running `.map` and `.filter` with `num_proc` consecutively instantiates a new multiprocessing pool for each call.
As instantiating a Pool is very resource-intensive, it can become a bottleneck when filtering iteratively.
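To make the cost concrete, a typical pipeline today looks like the sketch below (the dataset and the mapped functions are placeholders); every call builds and tears down its own pool:
```
from datasets import load_dataset

ds = load_dataset("text", data_files="corpus/*.txt", split="train")  # placeholder data

ds = ds.map(lambda x: {"n_chars": len(x["text"])}, num_proc=8)   # pool #1
ds = ds.filter(lambda x: x["n_chars"] > 0, num_proc=8)           # pool #2
ds = ds.map(lambda x: {"text": x["text"].lower()}, num_proc=8)   # pool #3
```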
My ideas:
1. There should be an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside is that it would be complex to determine the correct resource allocation and deallocation of the pool, i.e. the dataset can outlive the utility of the pool.
2. Provide a pool as an argument. The downside would be the expertise required from the user. The upside is that there is better resource management.
### Motivation
It is really slow to iteratively perform map and filter operations on a dataset.
### Your contribution
If approved I could integrate it. I would need to know what method would be most suitable to implement from the two options above. | 2024-03-26T17:35:25Z | 6,759 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-26T17:35:25Z | https://api.github.com/repos/huggingface/datasets/issues/6759/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6759/timeline | Persistent multi-process Pool | https://api.github.com/repos/huggingface/datasets/issues/6759/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DqQfb | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6759 | https://github.com/huggingface/datasets/issues/6759 | false |
2,208,494,302 | https://api.github.com/repos/huggingface/datasets/issues/6758/labels{/name} | ### Describe the bug
I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines or paragraphs, or to take them whole. Passing `sample_by="document"` to `load_dataset` results in files getting split into lines regardless. I have edited `src/datasets/packaged_modules/text/text.py` for myself to switch the default and it works fine.
As a side note, the `if-else` for `sample_by` will silently load an empty dataset if someone makes a typo in the argument, which is not ideal.
### Steps to reproduce the bug
1. Prepare data as a bunch of files in a directory.
2. Load that data via `load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")`.
3. Inspect the resultant dataset: every item should have the form of `{"text": <a line from a file>}`.
### Expected behavior
`load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")` should result in a dataset with items of the form `{"text": <one document>}`.
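For concreteness, a minimal sketch of the call I have in mind (the glob is a placeholder for my real paths):
```
from datasets import load_dataset

# sample_by="document" should keep each file as a single example;
# with 2.18.0 I still get one example per line.
ds = load_dataset(
    "text",
    data_files="my_corpus/*.txt",  # placeholder glob
    sample_by="document",
    split="train",
)
print(ds.num_rows)  # expected: number of files, observed: number of lines
```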
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1046-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 2024-04-08T13:42:35Z | 6,758 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-26T14:55:33Z | https://api.github.com/repos/huggingface/datasets/issues/6758/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/6758/timeline | Passing `sample_by` to `load_dataset` when loading text data does not work | https://api.github.com/repos/huggingface/datasets/issues/6758/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/823693?v=4",
"events_url": "https://api.github.com/users/ntoxeg/events{/privacy}",
"followers_url": "https://api.github.com/users/ntoxeg/followers",
"following_url": "https://api.github.com/users/ntoxeg/following{/other_user}",
"gists_url": "https://api.github.com/users/ntoxeg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ntoxeg",
"id": 823693,
"login": "ntoxeg",
"node_id": "MDQ6VXNlcjgyMzY5Mw==",
"organizations_url": "https://api.github.com/users/ntoxeg/orgs",
"received_events_url": "https://api.github.com/users/ntoxeg/received_events",
"repos_url": "https://api.github.com/users/ntoxeg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ntoxeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntoxeg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ntoxeg"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | null | NONE | null | null | I_kwDODunzps6DovLe | [
"Thanks for reporting! We are working on a fix."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6758/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6758 | https://github.com/huggingface/datasets/issues/6758 | false |
2,206,280,340 | https://api.github.com/repos/huggingface/datasets/issues/6757/labels{/name} | Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/transformers-doc-builder"` by default), we don't run the CI inside a container, therefore saving ~2min of download time. The plan is to test disabling the transformers container on a few "big" repos, and if everything works correctly, we will stop making it the default container. More details on https://github.com/huggingface/doc-builder/pull/487.
cc @mishig25 | 2024-03-27T16:26:35Z | 6,757 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2024-03-25T17:16:11Z | https://api.github.com/repos/huggingface/datasets/issues/6757/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6757/timeline | Test disabling transformers containers in docs CI | https://api.github.com/repos/huggingface/datasets/issues/6757/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6757.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6757",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6757.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6757"
} | PR_kwDODunzps5qr7Li | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6757). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"On slack it was mentioned that it was actually slower for `datasets`, should we close this one or am I missing something ?",
"@lhoestq I converted to draft. Want to make some more tests and will let you know"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6757/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6757 | https://github.com/huggingface/datasets/pull/6757 | true |
2,205,557,725 | https://api.github.com/repos/huggingface/datasets/issues/6756/labels{/name} | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In particular: a SQLite file can contain multiple tables, which might be mapped to multiple configs. Maybe the details of splits and configs should be defined in the README YAML, or we could use the same format as for ZIP files: `Iris.sqlite::Iris`.
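For reference, a single table can already be pulled out of a SQLite file by hand with `Dataset.from_sql`; a minimal sketch, assuming a local copy of the file (the `Iris.sqlite` / `Iris` names below just mirror the example dataset above):
```python
from datasets import Dataset

# Load one table ("Iris") from a local SQLite file into a Dataset.
# Passing a URI string as `con` requires sqlalchemy; a sqlite3.Connection
# object can be passed instead.
ds = Dataset.from_sql("Iris", "sqlite:///Iris.sqlite")
print(ds)
```
So the missing piece is mainly on the `load_dataset` / viewer side: deciding how the tables inside one file map to configs and splits.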
See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite
Note: should we also support DuckDB files? | 2024-03-26T16:09:32Z | 6,756 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-25T11:48:05Z | https://api.github.com/repos/huggingface/datasets/issues/6756/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6756/timeline | Support SQLite files? | https://api.github.com/repos/huggingface/datasets/issues/6756/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | completed | CONTRIBUTOR | 2024-03-26T16:09:32Z | null | I_kwDODunzps6DdiPd | [
"You can use `Dataset.from_sql(path_to_sql_file)` already. Though we haven't added the Sql dataset builder to the `_PACKAGED_DATASETS_MODULES` list or in `_EXTENSION_TO_MODULE` to map `.sqlite` to the Sql dataset builder\r\n\r\nThis would allow to load a dataset repository with a `.sqlite` file using `load_dataset` and enable the Dataset Viewer",
"Considering `Dataset.from_sql`'s (extremely) low usage, I don't think many users are interested in using this format for their datasets. Also, SQLite files are hard/impossible to stream efficiently and require custom logic to define splits/subsets, so IMO we shouldn't encourage people to use SQLite on the Hub.\r\n\r\n@severo Do you have some real-world examples of datasets published in this format?",
"No. Indeed, it seems better to explicitly not support sqlite"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6756/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6756 | https://github.com/huggingface/datasets/issues/6756 | false |
2,204,573,289 | https://api.github.com/repos/huggingface/datasets/issues/6755/labels{/name} | ### Describe the bug
There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
It should be `caching is enabled`.
### Steps to reproduce the bug
Please visit
https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
### Expected behavior
`caching is enabled`
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | 2024-04-02T14:01:19Z | 6,755 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | 2024-03-24T21:47:52Z | https://api.github.com/repos/huggingface/datasets/issues/6755/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
} | https://api.github.com/repos/huggingface/datasets/issues/6755/timeline | Small typo on the documentation | https://api.github.com/repos/huggingface/datasets/issues/6755/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT"
}
] | null | completed | NONE | 2024-04-02T14:01:19Z | null | I_kwDODunzps6DZx5p | [
"Thanks for reporting @fostiropoulos! I've edited your comment to fix the link to the problematic line.\r\n",
"@mariosasko can i take this up?",
"#self-assign"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6755/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6755 | https://github.com/huggingface/datasets/issues/6755 | false |
2,204,214,595 | https://api.github.com/repos/huggingface/datasets/issues/6754/labels{/name} | Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729
I didn't find a guideline on how to run the tests, so I just ran the following steps to make sure that this bug is fixed:
1. `python test.py`
2. then `HF_DATASETS_OFFLINE=1 python test.py`

The `test.py` script is:
```python
import datasets
datasets.utils.logging.set_verbosity_info()
ds = datasets.load_dataset('izhx/STS17-debug')
print(ds)
ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')
print(ds)
```
| 2024-04-09T01:19:56Z | 6,754 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-24T06:59:15Z | https://api.github.com/repos/huggingface/datasets/issues/6754/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6754/timeline | Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache` | https://api.github.com/repos/huggingface/datasets/issues/6754/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26690193?v=4",
"events_url": "https://api.github.com/users/izhx/events{/privacy}",
"followers_url": "https://api.github.com/users/izhx/followers",
"following_url": "https://api.github.com/users/izhx/following{/other_user}",
"gists_url": "https://api.github.com/users/izhx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/izhx",
"id": 26690193,
"login": "izhx",
"node_id": "MDQ6VXNlcjI2NjkwMTkz",
"organizations_url": "https://api.github.com/users/izhx/orgs",
"received_events_url": "https://api.github.com/users/izhx/received_events",
"repos_url": "https://api.github.com/users/izhx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/izhx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izhx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/izhx"
} | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6754",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6754"
} | PR_kwDODunzps5qk-nr | [
"@lhoestq hi 😃, is there something else I need to do to check this change?",
"I added two tests and passed them on my server.\r\n\r\n```\r\npytest tests/packaged_modules/test_cache.py \r\n========================================================================== test session starts ==========================================================================\r\nplatform linux -- Python 3.11.5, pytest-8.1.1, pluggy-1.4.0\r\nrootdir: /mnt/nas/datasets\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.5.0, datadir-1.5.0\r\ncollected 8 items \r\n\r\ntests/packaged_modules/test_cache.py ........ [100%]\r\n\r\n========================================================================== 8 passed in 50.71s ===========================================================================\r\n\r\n```\r\n\r\n```\r\npytest tests/test_load.py\r\n========================================================================== test session starts ==========================================================================\r\nplatform linux -- Python 3.11.5, pytest-8.1.1, pluggy-1.4.0\r\nrootdir: /mnt/nas/datasets\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.5.0, datadir-1.5.0\r\ncollected 151 items \r\n\r\ntests/test_load.py .............................................................................................................................................. [ 94%]\r\n......... [100%]\r\n\r\n...\r\n\r\n============================================================= 151 passed, 29 warnings in 578.36s (0:09:38) ==============================================================\r\n```\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6754). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @izhx! I have also faced this issue, happy to see it already addressed, looking forward for PR merge :)"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6754/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6754 | https://github.com/huggingface/datasets/pull/6754 | true |
2,204,155,091 | https://api.github.com/repos/huggingface/datasets/issues/6753/labels{/name} | ### Describe the bug
When trying to run
```
import datasets
print(datasets.__version__)
```
It generates the following error
```
TypeError: expected string or bytes-like object
```
It looks like it cannot find a valid version of `fsspec`,
though the `fsspec` version looks fine when I check it via:
```
import fsspec
print(fsspec.__version__)
# output: 2024.3.1
```
Detailed crash report
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import datasets
2 print(datasets.__version__)
File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18
1 # ruff: noqa
2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
3 #
(...)
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
16 __version__ = "2.18.0"
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:66
63 from multiprocess import Pool
64 from tqdm.contrib.concurrent import thread_map
---> 66 from . import config
67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/conda/lib/python3.10/site-packages/datasets/config.py:41
39 # Imports
40 DILL_VERSION = version.parse(importlib.metadata.version("dill"))
---> 41 FSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec"))
42 PANDAS_VERSION = version.parse(importlib.metadata.version("pandas"))
43 PYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow"))
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:49, in parse(version)
43 """
44 Parse the given version string and return either a :class:`Version` object
45 or a :class:`LegacyVersion` object depending on if the given version is
46 a valid PEP 440 version or a legacy version.
47 """
48 try:
---> 49 return Version(version)
50 except InvalidVersion:
51 return LegacyVersion(version)
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:264, in Version.__init__(self, version)
261 def __init__(self, version: str) -> None:
262
263 # Validate the version and parse it into pieces
--> 264 match = self._regex.search(version)
265 if not match:
266 raise InvalidVersion(f"Invalid version: '{version}'")
TypeError: expected string or bytes-like object
```
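The failing call is `importlib.metadata.version("fsspec")` in `datasets/config.py` (line 41 in the traceback above). A small check of what that call returns on the broken environment may help narrow this down — presumably it returns `None` there, which is what `version.parse` then rejects:
```python
import importlib.metadata

# datasets/config.py effectively runs version.parse(importlib.metadata.version("fsspec")).
# If this prints None (e.g. because of a duplicate or broken fsspec install in the image),
# version.parse(None) raises exactly the TypeError shown above.
print(importlib.metadata.version("fsspec"))
```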
### Steps to reproduce the bug
1. Run `!pip install -U datasets` on Kaggle
2. Check that `datasets` is installed via:
```
import datasets
print(datasets.__version__)
```
### Expected behavior
Expected to print datasets version, like `2.18.0`
### Environment info
Running on Kaggle, latest environment; here is the notebook: https://www.kaggle.com/code/jtv199/mistrial-7b-part2 | 2024-04-04T13:50:35Z | 6,753 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-24T03:01:30Z | https://api.github.com/repos/huggingface/datasets/issues/6753/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6753/timeline | Type error when importing datasets on Kaggle | https://api.github.com/repos/huggingface/datasets/issues/6753/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/18300717?v=4",
"events_url": "https://api.github.com/users/jtv199/events{/privacy}",
"followers_url": "https://api.github.com/users/jtv199/followers",
"following_url": "https://api.github.com/users/jtv199/following{/other_user}",
"gists_url": "https://api.github.com/users/jtv199/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jtv199",
"id": 18300717,
"login": "jtv199",
"node_id": "MDQ6VXNlcjE4MzAwNzE3",
"organizations_url": "https://api.github.com/users/jtv199/orgs",
"received_events_url": "https://api.github.com/users/jtv199/received_events",
"repos_url": "https://api.github.com/users/jtv199/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jtv199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtv199/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jtv199"
} | [] | null | completed | NONE | 2024-03-30T00:23:49Z | null | I_kwDODunzps6DYLzT | [
"I have the same problem \r\nIt seems that it only appears when you are using GPU \r\nIt seems to work fine with the 2.17 version though",
"Same here.",
"> I have the same problem\r\n> It seems that it only appears when you are using GPU\r\n> It seems to work fine with the 2.17 version though\r\n\r\nI downgraded from 2.18 to 2.17, and it works with CPU/GPU .. except now pyarrow complains\r\n\r\n```\r\n...\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:830, in pyarrow.lib._PandasConvertible.to_pandas()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/table.pxi:3989, in pyarrow.lib.Table._to_pandas()\r\n\r\nImportError: cannot import name table_to_blockmanager\r\n```\r\n\r\nsee also https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474#2722594",
"Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu aswell",
"I think you should remain open this issue. It works at the previous version but not the latter versions. It is possible as a bug that the maintainer could take note for.",
"> Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu as well\r\n\r\nVerified it's working w/ GPU if I make these 3 updates.\r\n\r\n```\r\ndatasets==2.16.0\r\nfsspec==2023.10.0\r\ngcsfs==2023.10.0\r\n```\r\n\r\nbut the issue shouldn't be closed, this is just a workaround until they get the issue with 2.18.0 resolved.\r\n\r\nSee also: https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474",
"> > Solved for me by downgrading `!pip install -U datasets==2.16.0` Works with gpu as well\r\n> \r\n> Verified it's working w/ GPU if I make these 3 updates.\r\n> \r\n> ```\r\n> datasets==2.16.0\r\n> fsspec==2023.10.0\r\n> gcsfs==2023.10.0\r\n> ```\r\n> \r\n> but the issue shouldn't be closed, this is just a workaround until they get the issue with 2.18.0 resolved.\r\n> \r\n> See also: https://www.kaggle.com/competitions/pii-detection-removal-from-educational-data/discussion/487474\r\n\r\nThis also works for me, thanks"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6753/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6753 | https://github.com/huggingface/datasets/issues/6753 | false |
2,204,043,839 | https://api.github.com/repos/huggingface/datasets/issues/6752/labels{/name} | ### Describe the bug
I'm loading a HuggingFace Dataset for images.
I'm running a preprocessing (map operation) step that applies a few operations, one of them being a conversion to float16. The Dataset features also say that 'img' is of type float16. However, whenever I take an image from that HuggingFace Dataset instance, its type turns out to be float32.
### Steps to reproduce the bug
```python
import torch
import torchvision.transforms.v2 as transforms
from datasets import load_dataset

dataset = load_dataset('cifar10', split='test')
dataset = dataset.with_format("torch")

data_transform = transforms.Compose([transforms.Resize((32, 32)),
                                     transforms.ToDtype(torch.float16, scale=True),
                                     transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
                                     ])

def _preprocess(examples):
    # Permutes from (BS x H x W x C) to (BS x C x H x W)
    images = torch.permute(examples['img'], (0, 3, 2, 1))
    examples['img'] = data_transform(images)
    return examples
dataset = dataset.map(_preprocess, batched=True, batch_size=8)
```
Now at this point `dataset.features` shows float16, which is great because that's what I want.
```python
print(dataset.features['img'])
Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None)
```
But when I try to sample an image from this dataset, I get a float32 image when I'm expecting float16:
```python
print(next(iter(dataset))['img'].dtype)
torch.float32
```
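To narrow down whether the cast happens in the stored Arrow data or only in the torch formatter, here is a small debugging sketch reusing the `dataset` from above (not a fix, just a check):
```python
# The numpy view should reflect the dtype of the underlying Arrow data,
# while plain indexing goes through the torch formatter set earlier.
np_img = dataset.with_format("numpy")[0]['img']
print(np_img.dtype)               # float16 here would mean the data is stored as float16
print(dataset[0]['img'].dtype)    # dtype actually produced by the torch formatter
```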
### Expected behavior
I'm expecting the images loaded after the transformation to stay in float16.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.9
- `huggingface_hub` version: 0.21.4
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | 2024-03-23T20:53:56Z | 6,752 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-23T20:53:56Z | https://api.github.com/repos/huggingface/datasets/issues/6752/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6752/timeline | Precision being changed from float16 to float32 unexpectedly | https://api.github.com/repos/huggingface/datasets/issues/6752/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gcervantes8",
"id": 21228908,
"login": "gcervantes8",
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gcervantes8"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DXwo_ | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6752/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6752 | https://github.com/huggingface/datasets/issues/6752 | false |
2,203,951,501 | https://api.github.com/repos/huggingface/datasets/issues/6751/labels{/name} | Some functions in `streaming_download_manager.py` are not closing the files they open, which leads to `Unclosed file` warnings in our code. This fixes a few of them. | 2024-03-26T00:40:57Z | 6,751 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-23T16:32:08Z | https://api.github.com/repos/huggingface/datasets/issues/6751/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6751/timeline | Use 'with' operator for some download functions | https://api.github.com/repos/huggingface/datasets/issues/6751/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31669?v=4",
"events_url": "https://api.github.com/users/Moisan/events{/privacy}",
"followers_url": "https://api.github.com/users/Moisan/followers",
"following_url": "https://api.github.com/users/Moisan/following{/other_user}",
"gists_url": "https://api.github.com/users/Moisan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moisan",
"id": 31669,
"login": "Moisan",
"node_id": "MDQ6VXNlcjMxNjY5",
"organizations_url": "https://api.github.com/users/Moisan/orgs",
"received_events_url": "https://api.github.com/users/Moisan/received_events",
"repos_url": "https://api.github.com/users/Moisan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moisan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moisan"
} | [] | null | null | NONE | 2024-03-26T00:40:57Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6751",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6751"
} | PR_kwDODunzps5qkKLH | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6751). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I was mistaken on the intent of those functions, closing the PR."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6751/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6751 | https://github.com/huggingface/datasets/pull/6751 | true |
2,203,590,658 | https://api.github.com/repos/huggingface/datasets/issues/6750/labels{/name} | ### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not found. Setting CardData to empty.
*hangs bc i'm firewalled*
```
stack trace from ctrl-c:
```
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
output_path = get_from_cache(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache
response = http_head(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head
response = _request_with_retry(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
```
### Expected behavior
loads the dataset
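One way to rule out the Hub round-trips entirely is `datasets`' offline mode; a minimal sketch, assuming the dataset is already in the local cache (the flag has to be set before the library is imported):
```python
import os

# Read by datasets at import time, so it must be set first.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("hh-rlhf")  # same dataset path as in the repro above
```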
### Environment info
```
> pip show datasets
Name: datasets
Version: 2.18.0
```
Python 3.10.2 | 2024-04-03T06:50:42Z | 6,750 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-23T01:06:32Z | https://api.github.com/repos/huggingface/datasets/issues/6750/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6750/timeline | `load_dataset` requires a network connection for local download? | https://api.github.com/repos/huggingface/datasets/issues/6750/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/6306695?v=4",
"events_url": "https://api.github.com/users/MiroFurtado/events{/privacy}",
"followers_url": "https://api.github.com/users/MiroFurtado/followers",
"following_url": "https://api.github.com/users/MiroFurtado/following{/other_user}",
"gists_url": "https://api.github.com/users/MiroFurtado/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MiroFurtado",
"id": 6306695,
"login": "MiroFurtado",
"node_id": "MDQ6VXNlcjYzMDY2OTU=",
"organizations_url": "https://api.github.com/users/MiroFurtado/orgs",
"received_events_url": "https://api.github.com/users/MiroFurtado/received_events",
"repos_url": "https://api.github.com/users/MiroFurtado/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MiroFurtado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiroFurtado/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MiroFurtado"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DWCAC | [
"Are you using `HF_DATASETS_OFFLINE=1` ?",
"> Are you using `HF_DATASETS_OFFLINE=1` ?\r\n\r\nThis doesn't work for me. `datasets=2.18.0`\r\n\r\n`test.py`:\r\n```\r\nimport datasets\r\n\r\ndatasets.utils.logging.set_verbosity_info()\r\n\r\nds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')\r\n\r\nprint(ds)\r\n```\r\n\r\nrun `python test.py`\r\n```\r\nGenerating dataset afqmc (/home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4)\r\nDownloading and preparing dataset afqmc/default to /home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4...\r\nDataset not on Hf google storage. Downloading and preparing it from source\r\nhf://datasets/C-MTEB/AFQMC@b44c3b011063adb25877c13823db83bb193913c4/data/validation-00000-of-00001-b8fc393b5ddedac7.parquet not found in cache or force_download set to True, downloading to /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b.incomplete\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240k/240k [00:01<00:00, 178kB/s]\r\nstoring hf://datasets/C-MTEB/AFQMC@b44c3b011063adb25877c13823db83bb193913c4/data/validation-00000-of-00001-b8fc393b5ddedac7.parquet in cache at /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b\r\ncreating metadata file for /home/data/.cache/huggingface/datasets/downloads/78949f93104662359f4f3d5a2f7ec1ae37af5a5af44420a51212ea08c0be966b\r\nDownloading took 0.0 min\r\nChecksum Computation took 0.0 min\r\nGenerating test split\r\nGenerating test split: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3861/3861 [00:00<00:00, 3972.00 examples/s]\r\nGenerating train split\r\nGenerating train split: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 34334/34334 [00:00<00:00, 34355.50 examples/s]\r\nGenerating validation split\r\nGenerating validation split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4316/4316 [00:00<00:00, 4477.00 examples/s]\r\nAll the splits matched successfully.\r\nDataset afqmc downloaded and prepared to /home/data/.cache/huggingface/datasets/C-MTEB___afqmc/default/0.0.0/b44c3b011063adb25877c13823db83bb193913c4. 
Subsequent calls will reuse this data.\r\nDatasetDict({\r\n test: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 3861\r\n })\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 34334\r\n })\r\n validation: Dataset({\r\n features: ['sentence1', 'sentence2', 'score', 'idx'],\r\n num_rows: 4316\r\n })\r\n})\r\n```\r\n\r\nThen run `HF_DATASETS_OFFLINE=1 python test.py`\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 9, in <module>\r\n ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 2556, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 2228, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/dev/shm/tmp_env/lib/python3.10/site-packages/datasets/load.py\", line 1871, in dataset_module_factory\r\n raise ConnectionError(f\"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}\") from None\r\nConnectionError: Couldn't reach the Hugging Face Hub for dataset 'C-MTEB/AFQMC': Offline mode is enabled.\r\n```\r\n\r\n",
"I was having similar inexplicable issues.\r\n\r\nDoing this I *think* helped, but, `datasets` still *clearly* does not want to respect the cache:\r\n\r\n```python\r\npip install --upgrade datasets # now it is 2.18.0\r\nHF_DATASETS_OFFLINE=\"1\" python blah.py\r\n```\r\n\r\nOr similarly, I must spacify that env var to resuse the cache, IE, no arg to `load_dataset` helps it reuse the cache:\r\n\r\n```python\r\n\r\nimport os\r\nos.environ[\"HF_DATASETS_OFFLINE\"] = \"1\"\r\n\r\nimport logging\r\nlogging.basicConfig(level=logging.DEBUG)\r\n\r\nimport datasets\r\n# >>> datasets.__version__\r\n# '2.18.0'\r\n\r\ndatasets.utils.logging.set_verbosity_info()\r\ndata = datasets.load_dataset(\"c-s-ale/dolly-15k-instruction-alpaca-format\")\r\n```"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6750/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6750 | https://github.com/huggingface/datasets/issues/6750 | false |
2,202,310,116 | https://api.github.com/repos/huggingface/datasets/issues/6749/labels{/name} | Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0` | 2024-03-22T14:51:45Z | 6,749 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-22T11:44:11Z | https://api.github.com/repos/huggingface/datasets/issues/6749/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6749/timeline | Fix fsspec tqdm callback | https://api.github.com/repos/huggingface/datasets/issues/6749/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-22T14:45:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6749.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6749",
"merged_at": "2024-03-22T14:45:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6749.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6749"
} | PR_kwDODunzps5qeoSk | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6749). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005017 / 0.011353 (-0.006336) | 0.002958 / 0.011008 (-0.008050) | 0.063455 / 0.038508 (0.024946) | 0.028206 / 0.023109 (0.005096) | 0.230884 / 0.275898 (-0.045014) | 0.252688 / 0.323480 (-0.070792) | 0.002995 / 0.007986 (-0.004991) | 0.002613 / 0.004328 (-0.001716) | 0.046477 / 0.004250 (0.042226) | 0.040662 / 0.037052 (0.003609) | 0.241824 / 0.258489 (-0.016665) | 0.269063 / 0.293841 (-0.024778) | 0.027336 / 0.128546 (-0.101210) | 0.010614 / 0.075646 (-0.065032) | 0.216087 / 0.419271 (-0.203184) | 0.035667 / 0.043533 (-0.007866) | 0.238657 / 0.255139 (-0.016482) | 0.253433 / 0.283200 (-0.029767) | 0.017433 / 0.141683 (-0.124250) | 1.120856 / 1.452155 (-0.331299) | 1.157415 / 1.492716 (-0.335302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088028 / 0.018006 (0.070022) | 0.277368 / 0.000490 (0.276878) | 0.000204 / 0.000200 (0.000004) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017956 / 0.037411 (-0.019455) | 0.061061 / 0.014526 (0.046535) | 0.073323 / 0.176557 (-0.103234) | 0.119254 / 0.737135 (-0.617881) | 0.074308 / 0.296338 (-0.222031) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285118 / 0.215209 (0.069908) | 2.785796 / 2.077655 (0.708142) | 1.476436 / 1.504120 (-0.027684) | 1.356505 / 1.541195 (-0.184690) | 1.362505 / 
1.468490 (-0.105985) | 0.554064 / 4.584777 (-4.030713) | 2.395774 / 3.745712 (-1.349938) | 2.713703 / 5.269862 (-2.556159) | 1.701020 / 4.565676 (-2.864657) | 0.062370 / 0.424275 (-0.361905) | 0.004944 / 0.007607 (-0.002663) | 0.327948 / 0.226044 (0.101904) | 3.243739 / 2.268929 (0.974811) | 1.803881 / 55.444624 (-53.640743) | 1.551635 / 6.876477 (-5.324841) | 1.560627 / 2.142072 (-0.581446) | 0.628187 / 4.805227 (-4.177040) | 0.115824 / 6.500664 (-6.384840) | 0.041655 / 0.075469 (-0.033814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968797 / 1.841788 (-0.872991) | 11.220905 / 8.074308 (3.146597) | 9.322584 / 10.191392 (-0.868808) | 0.139629 / 0.680424 (-0.540795) | 0.013823 / 0.534201 (-0.520378) | 0.286700 / 0.579283 (-0.292583) | 0.263517 / 0.434364 (-0.170847) | 0.341264 / 0.540337 (-0.199074) | 0.418834 / 1.386936 (-0.968102) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005404 / 0.011353 (-0.005949) | 0.003630 / 0.011008 (-0.007378) | 0.048977 / 0.038508 (0.010469) | 0.029980 / 0.023109 (0.006871) | 0.274671 / 0.275898 (-0.001227) | 0.295671 / 0.323480 (-0.027808) | 0.004230 / 0.007986 (-0.003756) | 0.002656 / 0.004328 (-0.001672) | 0.048603 / 0.004250 (0.044353) | 0.044323 / 0.037052 (0.007271) | 0.286499 / 0.258489 (0.028010) | 0.313199 / 0.293841 (0.019358) | 0.030079 / 0.128546 (-0.098468) | 0.010480 / 0.075646 (-0.065166) | 0.058226 / 0.419271 (-0.361045) | 0.054920 / 0.043533 (0.011387) | 0.274921 / 0.255139 (0.019783) | 0.296559 / 0.283200 (0.013360) | 0.019164 / 0.141683 (-0.122519) | 1.154703 / 1.452155 (-0.297452) | 1.207015 / 1.492716 (-0.285701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089368 / 0.018006 (0.071362) | 0.301196 / 0.000490 (0.300706) | 0.000208 / 0.000200 (0.000008) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021355 / 0.037411 (-0.016056) | 0.074688 / 0.014526 (0.060162) | 0.085840 / 0.176557 (-0.090716) | 0.125784 / 0.737135 (-0.611351) | 0.087103 / 0.296338 (-0.209235) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296727 / 0.215209 (0.081518) | 2.884922 / 2.077655 (0.807267) | 1.586515 / 1.504120 (0.082395) | 1.474417 / 1.541195 (-0.066777) | 1.492105 / 1.468490 (0.023615) | 0.570016 / 4.584777 (-4.014761) | 2.435760 / 3.745712 (-1.309952) | 2.657999 / 5.269862 (-2.611863) | 1.740160 / 4.565676 (-2.825516) | 0.063743 / 0.424275 (-0.360532) | 0.005048 / 0.007607 (-0.002559) | 0.341279 / 0.226044 (0.115235) | 3.396185 / 2.268929 (1.127256) | 1.952825 / 55.444624 (-53.491800) | 1.676669 / 6.876477 (-5.199808) | 1.773158 / 2.142072 (-0.368915) | 0.650664 / 4.805227 (-4.154563) | 0.116815 / 6.500664 (-6.383849) | 0.040813 / 0.075469 (-0.034656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999836 / 1.841788 (-0.841952) | 11.854540 / 8.074308 (3.780232) | 10.245516 / 10.191392 (0.054124) | 0.141235 / 0.680424 (-0.539189) | 0.015562 / 0.534201 (-0.518639) | 0.287556 / 0.579283 (-0.291727) | 0.274946 / 0.434364 (-0.159418) | 0.324652 / 0.540337 (-0.215685) | 0.449204 / 1.386936 (-0.937733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed2b406d045349dad16738985c947fe743260710 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6749/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6749 | https://github.com/huggingface/datasets/pull/6749 | true |
2,201,517,348 | https://api.github.com/repos/huggingface/datasets/issues/6748/labels{/name} | ### Describe the bug
I load a dataset and then slice the first 300 samples using the `:` operator; however, the resulting dataset is not what I expect, as shown in the output below:
```bash
len(dataset)=1050324
len(dataset[:300])=2
len(dataset[0:300])=2
len(dataset.select(range(300)))=300
```
### Steps to reproduce the bug
Load a dataset, then:
```python
from datasets import load_from_disk

dataset = load_from_disk(args.train_data_dir)
print(f"{len(dataset)=}", flush=True)
print(f"{len(dataset[:300])=}", flush=True)
print(f"{len(dataset[0:300])=}", flush=True)
print(f"{len(dataset.select(range(300)))=}", flush=True)
```
### Expected behavior
```bash
len(dataset)=1050324
len(dataset[:300])=300
len(dataset[0:300])=300
len(dataset.select(range(300)))=300
```
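For reference, a self-contained sketch of what seems to be going on (the column names are made up): slicing a `Dataset` returns a plain dict mapping column names to lists of values, so `len()` on a slice counts columns rather than rows, while `select` returns an actual `Dataset`:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(1000)), "b": list(range(1000))})
print(len(ds))                     # 1000 -> number of rows
print(len(ds[:300]))               # 2    -> the slice is a dict with one key per column
print(len(ds[:300]["a"]))          # 300  -> each column in the dict holds 300 values
print(len(ds.select(range(300))))  # 300  -> a real Dataset with 300 rows
```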
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- `huggingface_hub` version: 0.20.2
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 2024-03-22T16:43:57Z | 6,748 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-22T01:49:13Z | https://api.github.com/repos/huggingface/datasets/issues/6748/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6748/timeline | Strange slicing behavior | https://api.github.com/repos/huggingface/datasets/issues/6748/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DOH0k | [
"As explained in the [docs](https://huggingface.co/docs/datasets/v2.18.0/en/access#slicing), slicing a `Dataset` returns a dictionary that maps its column names to their values. So, `len(dataset[:300])=2` is expected, assuming your dataset has 2 columns (the returned dict has 2 keys, but each value in the dict has 300 items).\r\n` "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6748/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6748 | https://github.com/huggingface/datasets/issues/6748 | false |
2,201,219,384 | https://api.github.com/repos/huggingface/datasets/issues/6747/labels{/name} | There were a few fixes released recently, and some DVC ecosystem packages require a newer version of `fsspec`. | 2024-03-22T16:40:15Z | 6,747 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-21T21:25:49Z | https://api.github.com/repos/huggingface/datasets/issues/6747/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6747/timeline | chore(deps): bump fsspec | https://api.github.com/repos/huggingface/datasets/issues/6747/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3659196?v=4",
"events_url": "https://api.github.com/users/shcheklein/events{/privacy}",
"followers_url": "https://api.github.com/users/shcheklein/followers",
"following_url": "https://api.github.com/users/shcheklein/following{/other_user}",
"gists_url": "https://api.github.com/users/shcheklein/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shcheklein",
"id": 3659196,
"login": "shcheklein",
"node_id": "MDQ6VXNlcjM2NTkxOTY=",
"organizations_url": "https://api.github.com/users/shcheklein/orgs",
"received_events_url": "https://api.github.com/users/shcheklein/received_events",
"repos_url": "https://api.github.com/users/shcheklein/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shcheklein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shcheklein/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shcheklein"
} | [] | null | null | CONTRIBUTOR | 2024-03-22T16:28:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6747",
"merged_at": "2024-03-22T16:28:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6747"
} | PR_kwDODunzps5qa5L- | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6747). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005129 / 0.011353 (-0.006224) | 0.003788 / 0.011008 (-0.007220) | 0.063456 / 0.038508 (0.024948) | 0.029079 / 0.023109 (0.005969) | 0.237228 / 0.275898 (-0.038670) | 0.260554 / 0.323480 (-0.062926) | 0.003090 / 0.007986 (-0.004895) | 0.002730 / 0.004328 (-0.001599) | 0.049040 / 0.004250 (0.044789) | 0.042432 / 0.037052 (0.005380) | 0.256954 / 0.258489 (-0.001535) | 0.285912 / 0.293841 (-0.007929) | 0.027568 / 0.128546 (-0.100978) | 0.010402 / 0.075646 (-0.065245) | 0.206773 / 0.419271 (-0.212499) | 0.035381 / 0.043533 (-0.008152) | 0.243147 / 0.255139 (-0.011992) | 0.259419 / 0.283200 (-0.023781) | 0.019503 / 0.141683 (-0.122180) | 1.145537 / 1.452155 (-0.306618) | 1.204070 / 1.492716 (-0.288646) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092298 / 0.018006 (0.074291) | 0.300042 / 0.000490 (0.299553) | 0.000236 / 0.000200 (0.000036) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018624 / 0.037411 (-0.018788) | 0.063832 / 0.014526 (0.049306) | 0.075849 / 0.176557 (-0.100707) | 0.120919 / 0.737135 (-0.616216) | 0.075878 / 0.296338 (-0.220461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275545 / 0.215209 (0.060336) | 2.706004 / 2.077655 (0.628349) | 1.406398 / 1.504120 (-0.097722) | 1.287154 / 1.541195 (-0.254041) | 1.298278 / 
1.468490 (-0.170212) | 0.559763 / 4.584777 (-4.025014) | 2.434104 / 3.745712 (-1.311608) | 2.786338 / 5.269862 (-2.483523) | 1.720951 / 4.565676 (-2.844726) | 0.062082 / 0.424275 (-0.362193) | 0.004931 / 0.007607 (-0.002676) | 0.329998 / 0.226044 (0.103954) | 3.222105 / 2.268929 (0.953176) | 1.777539 / 55.444624 (-53.667085) | 1.533845 / 6.876477 (-5.342632) | 1.520357 / 2.142072 (-0.621715) | 0.638850 / 4.805227 (-4.166377) | 0.116718 / 6.500664 (-6.383946) | 0.042215 / 0.075469 (-0.033254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962791 / 1.841788 (-0.878997) | 11.509889 / 8.074308 (3.435581) | 9.507676 / 10.191392 (-0.683716) | 0.140780 / 0.680424 (-0.539644) | 0.014187 / 0.534201 (-0.520014) | 0.286363 / 0.579283 (-0.292920) | 0.263316 / 0.434364 (-0.171048) | 0.322099 / 0.540337 (-0.218239) | 0.415602 / 1.386936 (-0.971334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005175 / 0.011353 (-0.006178) | 0.003631 / 0.011008 (-0.007377) | 0.050277 / 0.038508 (0.011769) | 0.031879 / 0.023109 (0.008770) | 0.269966 / 0.275898 (-0.005933) | 0.297229 / 0.323480 (-0.026251) | 0.004278 / 0.007986 (-0.003707) | 0.002936 / 0.004328 (-0.001393) | 0.048686 / 0.004250 (0.044436) | 0.044262 / 0.037052 (0.007209) | 0.284578 / 0.258489 (0.026089) | 0.313681 / 0.293841 (0.019840) | 0.029064 / 0.128546 (-0.099482) | 0.010700 / 0.075646 (-0.064946) | 0.058366 / 0.419271 (-0.360905) | 0.051341 / 0.043533 (0.007809) | 0.271262 / 0.255139 (0.016123) | 0.290791 / 0.283200 (0.007591) | 0.019044 / 0.141683 (-0.122639) | 1.149514 / 1.452155 (-0.302641) | 1.209277 / 1.492716 (-0.283439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094879 / 0.018006 (0.076872) | 0.302196 / 0.000490 (0.301707) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021715 / 0.037411 (-0.015696) | 0.075122 / 0.014526 (0.060596) | 0.087393 / 0.176557 (-0.089164) | 0.125583 / 0.737135 (-0.611553) | 0.088722 / 0.296338 (-0.207617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295158 / 0.215209 (0.079949) | 2.930208 / 2.077655 (0.852553) | 1.590197 / 1.504120 (0.086077) | 1.459038 / 1.541195 (-0.082156) | 1.471690 / 1.468490 (0.003200) | 0.570279 / 4.584777 (-4.014498) | 2.456971 / 3.745712 (-1.288741) | 2.675315 / 5.269862 (-2.594547) | 1.750122 / 4.565676 (-2.815554) | 0.062905 / 0.424275 (-0.361370) | 0.005118 / 0.007607 (-0.002489) | 0.344263 / 0.226044 (0.118219) | 3.472460 / 2.268929 (1.203532) | 1.931707 / 55.444624 (-53.512917) | 1.658537 / 6.876477 (-5.217939) | 1.785794 / 2.142072 (-0.356278) | 0.637149 / 4.805227 (-4.168078) | 0.115838 / 6.500664 (-6.384826) | 0.040771 / 0.075469 (-0.034698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002869 / 1.841788 (-0.838919) | 12.048825 / 8.074308 (3.974517) | 10.407979 / 10.191392 (0.216587) | 0.150300 / 0.680424 (-0.530124) | 0.015299 / 0.534201 (-0.518902) | 0.286277 / 0.579283 (-0.293006) | 0.312186 / 0.434364 (-0.122178) | 0.322633 / 0.540337 (-0.217704) | 0.438431 / 1.386936 (-0.948505) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d5468836fe94e8be1ae093397dd43d4a2503b926 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6747/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6747 | https://github.com/huggingface/datasets/pull/6747 | true |
2,198,993,949 | https://api.github.com/repos/huggingface/datasets/issues/6746/labels{/name} | ### Describe the bug
I encountered a bug when running the example command line:
```bash
python main.py \
--model decapoda-research/llama-7b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save out/llama_7b/unstructured/wanda/
```
The bug occurred at these lines of code (when loading the C4 dataset):
```python
traindata = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
### Steps to reproduce the bug
1. Run the example command line shown above.
### Expected behavior
The C4 train and validation splits should load without errors. Instead, the same error as above is raised:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
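A workaround suggested in the maintainers' reply below (see the comments on this issue) is to update `datasets` and drop the legacy `'allenai--c4'` configuration name; a minimal sketch of that call:
```python
# Sketch of the suggested workaround: on a recent `datasets` version, point
# load_dataset directly at the data files, without the legacy "allenai--c4" name.
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
valdata = load_dataset(
    "allenai/c4",
    data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"},
    split="validation",
)
```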
### Environment info
I'm using CUDA 12.4, so I use `pip install pytorch` instead of the conda command provided in install.md.
Also, I've tried another environment using the same commands in install.md, but the same bug occurred | 2024-04-09T07:30:56Z | 6,746 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2024-03-21T02:53:04Z | https://api.github.com/repos/huggingface/datasets/issues/6746/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6746/timeline | ExpectedMoreSplits error when loading C4 dataset | https://api.github.com/repos/huggingface/datasets/issues/6746/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/65165345?v=4",
"events_url": "https://api.github.com/users/billwang485/events{/privacy}",
"followers_url": "https://api.github.com/users/billwang485/followers",
"following_url": "https://api.github.com/users/billwang485/following{/other_user}",
"gists_url": "https://api.github.com/users/billwang485/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/billwang485",
"id": 65165345,
"login": "billwang485",
"node_id": "MDQ6VXNlcjY1MTY1MzQ1",
"organizations_url": "https://api.github.com/users/billwang485/orgs",
"received_events_url": "https://api.github.com/users/billwang485/received_events",
"repos_url": "https://api.github.com/users/billwang485/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/billwang485/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billwang485/subscriptions",
"type": "User",
"url": "https://api.github.com/users/billwang485"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DEfwd | [
"Hi ! We updated the `allenai/c4` repository to allow people to specify which language to load easily (the the [c4 dataset page](https://huggingface.co/datasets/allenai/c4))\r\n\r\nTo fix this issue you can update `datasets` and remove the mention of the legacy configuration name \"allenai--c4\":\r\n\r\n```python\r\ntraindata = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')\r\nvaldata = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')\r\n```",
"Did you solve this problem?I have the same bug.It is no use to delete \"allenai--c4\".",
"Did you solve it? I met this problem too."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6746/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6746 | https://github.com/huggingface/datasets/issues/6746 | false |
2,198,541,732 | https://api.github.com/repos/huggingface/datasets/issues/6745/labels{/name} | ### Feature request
https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense.
### Motivation
[EDITED: insults not tolerated]
### Your contribution
[EDITED: insults not tolerated] | 2024-03-21T12:28:04Z | 6,745 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-20T20:54:06Z | https://api.github.com/repos/huggingface/datasets/issues/6745/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6745/timeline | Scraping the whole of github including private repos is bad; kindly stop | https://api.github.com/repos/huggingface/datasets/issues/6745/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost"
} | [] | null | completed | NONE | 2024-03-21T10:24:56Z | null | I_kwDODunzps6DCxWk | [
"It's not twitter here"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6745/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6745 | https://github.com/huggingface/datasets/issues/6745 | false |
2,197,910,168 | https://api.github.com/repos/huggingface/datasets/issues/6744/labels{/name} | ### Feature request
Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.
### Motivation
File locking doesn't work on all file systems (in my case, NFS-mounted Weka). If the `cache_dir` contained only small files, it would be possible to point it at local disk and the problem would be solved. However, since `cache_dir` is where both the small info files and the processed datasets are written, this isn't a feasible solution.
Considering https://github.com/huggingface/datasets/issues/6395, I still think this is something that belongs in Hugging Face `datasets` itself. Being able to control this per package is valuable: a user might have their dataset on a file system that doesn't support file locking while still using file locking on local disk to control some other type of access.
### Your contribution
My suggested solution:
```diff
diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py
index 19620e6e..58f41a02 100644
--- a/src/datasets/utils/_filelock.py
+++ b/src/datasets/utils/_filelock.py
@@ -18,11 +18,15 @@
import os
from filelock import FileLock as FileLock_
-from filelock import UnixFileLock
+from filelock import SoftFileLock, UnixFileLock
from filelock import __version__ as _filelock_version
from packaging import version
+if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'):
+ FileLock_ = SoftFileLock
+
+
class FileLock(FileLock_):
"""
A `filelock.FileLock` initializer that handles long paths.
```
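For illustration, a minimal sketch of how this proposed toggle could be used with a cache on an NFS-mounted file system. Note that `HF_USE_SOFTFILELOCK` is the variable introduced by the patch above, not an existing `datasets` setting, and the data file and cache path below are hypothetical:
```python
# Minimal sketch, assuming the HF_USE_SOFTFILELOCK toggle from the patch above.
# The variable must be set before `datasets` is imported, since the patch reads
# it at import time to alias FileLock to SoftFileLock.
import os

os.environ["HF_USE_SOFTFILELOCK"] = "true"

from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": "train.jsonl"},  # hypothetical data file
    cache_dir="/nfs/weka/hf_cache",       # hypothetical cache on the NFS mount
)
```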
| 2024-03-20T15:59:45Z | 6,744 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2024-03-20T15:59:45Z | https://api.github.com/repos/huggingface/datasets/issues/6744/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6744/timeline | Option to disable file locking | https://api.github.com/repos/huggingface/datasets/issues/6744/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/35767167?v=4",
"events_url": "https://api.github.com/users/VRehnberg/events{/privacy}",
"followers_url": "https://api.github.com/users/VRehnberg/followers",
"following_url": "https://api.github.com/users/VRehnberg/following{/other_user}",
"gists_url": "https://api.github.com/users/VRehnberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VRehnberg",
"id": 35767167,
"login": "VRehnberg",
"node_id": "MDQ6VXNlcjM1NzY3MTY3",
"organizations_url": "https://api.github.com/users/VRehnberg/orgs",
"received_events_url": "https://api.github.com/users/VRehnberg/received_events",
"repos_url": "https://api.github.com/users/VRehnberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VRehnberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VRehnberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VRehnberg"
} | [] | null | null | NONE | null | null | I_kwDODunzps6DAXKY | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6744/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6744 | https://github.com/huggingface/datasets/issues/6744 | false |
2,195,481,697 | https://api.github.com/repos/huggingface/datasets/issues/6743/labels{/name} | Fix #6738 | 2024-04-08T13:08:42Z | 6,743 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T16:54:22Z | https://api.github.com/repos/huggingface/datasets/issues/6743/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6743/timeline | Allow null values in dict columns | https://api.github.com/repos/huggingface/datasets/issues/6743/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2024-03-19T20:05:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6743",
"merged_at": "2024-03-19T20:05:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6743"
} | PR_kwDODunzps5qHeMZ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6743). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005013 / 0.011353 (-0.006340) | 0.003228 / 0.011008 (-0.007780) | 0.062763 / 0.038508 (0.024255) | 0.028937 / 0.023109 (0.005828) | 0.240777 / 0.275898 (-0.035121) | 0.266972 / 0.323480 (-0.056508) | 0.003073 / 0.007986 (-0.004913) | 0.002769 / 0.004328 (-0.001560) | 0.049265 / 0.004250 (0.045015) | 0.042061 / 0.037052 (0.005009) | 0.261714 / 0.258489 (0.003225) | 0.284896 / 0.293841 (-0.008944) | 0.027717 / 0.128546 (-0.100829) | 0.010430 / 0.075646 (-0.065216) | 0.209022 / 0.419271 (-0.210249) | 0.035941 / 0.043533 (-0.007591) | 0.246849 / 0.255139 (-0.008290) | 0.263205 / 0.283200 (-0.019994) | 0.019489 / 0.141683 (-0.122193) | 1.102595 / 1.452155 (-0.349559) | 1.170493 / 1.492716 (-0.322223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093611 / 0.018006 (0.075604) | 0.302041 / 0.000490 (0.301551) | 0.000223 / 0.000200 (0.000023) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018720 / 0.037411 (-0.018692) | 0.062199 / 0.014526 (0.047673) | 0.074888 / 0.176557 (-0.101669) | 0.120184 / 0.737135 (-0.616951) | 0.076756 / 0.296338 (-0.219583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287484 / 0.215209 (0.072275) | 2.787777 / 2.077655 (0.710123) | 1.488957 / 1.504120 (-0.015163) | 1.362678 / 1.541195 (-0.178517) | 1.364571 / 
1.468490 (-0.103919) | 0.563139 / 4.584777 (-4.021638) | 2.422224 / 3.745712 (-1.323488) | 2.798011 / 5.269862 (-2.471850) | 1.751159 / 4.565676 (-2.814517) | 0.062740 / 0.424275 (-0.361536) | 0.004918 / 0.007607 (-0.002689) | 0.338285 / 0.226044 (0.112240) | 3.316012 / 2.268929 (1.047083) | 1.845975 / 55.444624 (-53.598650) | 1.553187 / 6.876477 (-5.323290) | 1.564582 / 2.142072 (-0.577490) | 0.645987 / 4.805227 (-4.159240) | 0.118216 / 6.500664 (-6.382448) | 0.041243 / 0.075469 (-0.034226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970265 / 1.841788 (-0.871522) | 11.783152 / 8.074308 (3.708844) | 9.516584 / 10.191392 (-0.674808) | 0.148086 / 0.680424 (-0.532338) | 0.013689 / 0.534201 (-0.520512) | 0.289657 / 0.579283 (-0.289626) | 0.265966 / 0.434364 (-0.168398) | 0.328483 / 0.540337 (-0.211854) | 0.433544 / 1.386936 (-0.953392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005235 / 0.011353 (-0.006118) | 0.003515 / 0.011008 (-0.007493) | 0.049484 / 0.038508 (0.010976) | 0.029264 / 0.023109 (0.006154) | 0.278518 / 0.275898 (0.002620) | 0.298948 / 0.323480 (-0.024532) | 0.004308 / 0.007986 (-0.003678) | 0.002751 / 0.004328 (-0.001577) | 0.048952 / 0.004250 (0.044701) | 0.045379 / 0.037052 (0.008327) | 0.292633 / 0.258489 (0.034144) | 0.319405 / 0.293841 (0.025564) | 0.030201 / 0.128546 (-0.098345) | 0.010657 / 0.075646 (-0.064990) | 0.057842 / 0.419271 (-0.361430) | 0.053359 / 0.043533 (0.009826) | 0.281136 / 0.255139 (0.025997) | 0.295388 / 0.283200 (0.012188) | 0.018786 / 0.141683 (-0.122897) | 1.187181 / 1.452155 (-0.264974) | 1.198394 / 1.492716 (-0.294323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093861 / 0.018006 (0.075855) | 0.304019 / 0.000490 (0.303529) | 0.000220 / 0.000200 (0.000020) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021582 / 0.037411 (-0.015829) | 0.075381 / 0.014526 (0.060855) | 0.087886 / 0.176557 (-0.088671) | 0.125078 / 0.737135 (-0.612057) | 0.089339 / 0.296338 (-0.206999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295797 / 0.215209 (0.080588) | 2.912021 / 2.077655 (0.834367) | 1.592191 / 1.504120 (0.088071) | 1.471270 / 1.541195 (-0.069925) | 1.475535 / 1.468490 (0.007045) | 0.564114 / 4.584777 (-4.020663) | 2.442882 / 3.745712 (-1.302830) | 2.679433 / 5.269862 (-2.590428) | 1.752097 / 4.565676 (-2.813579) | 0.062748 / 0.424275 (-0.361527) | 0.005068 / 0.007607 (-0.002539) | 0.345554 / 0.226044 (0.119509) | 3.456929 / 2.268929 (1.188000) | 1.962781 / 55.444624 (-53.481844) | 1.688313 / 6.876477 (-5.188164) | 1.817392 / 2.142072 (-0.324681) | 0.639588 / 4.805227 (-4.165639) | 0.116148 / 6.500664 (-6.384516) | 0.040851 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009852 / 1.841788 (-0.831936) | 12.031749 / 8.074308 (3.957440) | 10.305107 / 10.191392 (0.113715) | 0.132960 / 0.680424 (-0.547464) | 0.014779 / 0.534201 (-0.519422) | 0.288903 / 0.579283 (-0.290381) | 0.275417 / 0.434364 (-0.158947) | 0.322628 / 0.540337 (-0.217709) | 0.445060 / 1.386936 (-0.941876) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f234fce40d5ffc96fac5198d8cc89817970d87ee \"CML watermark\")\n",
"notify https://huggingface.co/datasets/chaoyi-wu/PMC-Inline/discussions/1 once it's merged in dataset-viewer"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6743/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6743 | https://github.com/huggingface/datasets/pull/6743 | true |
2,195,134,854 | https://api.github.com/repos/huggingface/datasets/issues/6742/labels{/name} | Reported in https://github.com/huggingface/datasets-server/issues/2607 | 2024-03-19T18:24:39Z | 6,742 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T14:29:25Z | https://api.github.com/repos/huggingface/datasets/issues/6742/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6742/timeline | Fix missing download_config in get_data_patterns | https://api.github.com/repos/huggingface/datasets/issues/6742/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-19T18:15:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6742",
"merged_at": "2024-03-19T18:15:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6742"
} | PR_kwDODunzps5qGSfG | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6742). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005394 / 0.011353 (-0.005959) | 0.003780 / 0.011008 (-0.007228) | 0.063459 / 0.038508 (0.024951) | 0.028883 / 0.023109 (0.005774) | 0.239159 / 0.275898 (-0.036739) | 0.258123 / 0.323480 (-0.065357) | 0.003134 / 0.007986 (-0.004851) | 0.003452 / 0.004328 (-0.000876) | 0.049255 / 0.004250 (0.045005) | 0.042727 / 0.037052 (0.005675) | 0.257387 / 0.258489 (-0.001102) | 0.280762 / 0.293841 (-0.013079) | 0.027921 / 0.128546 (-0.100625) | 0.010867 / 0.075646 (-0.064779) | 0.207878 / 0.419271 (-0.211393) | 0.036003 / 0.043533 (-0.007530) | 0.247457 / 0.255139 (-0.007682) | 0.260231 / 0.283200 (-0.022969) | 0.019741 / 0.141683 (-0.121942) | 1.143645 / 1.452155 (-0.308510) | 1.188789 / 1.492716 (-0.303927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092065 / 0.018006 (0.074059) | 0.286021 / 0.000490 (0.285531) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018934 / 0.037411 (-0.018477) | 0.062474 / 0.014526 (0.047949) | 0.073384 / 0.176557 (-0.103172) | 0.121276 / 0.737135 (-0.615860) | 0.077792 / 0.296338 (-0.218546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285352 / 0.215209 (0.070143) | 2.783110 / 2.077655 (0.705456) | 1.487983 / 1.504120 (-0.016137) | 1.364264 / 1.541195 (-0.176930) | 1.388757 / 
1.468490 (-0.079733) | 0.568347 / 4.584777 (-4.016430) | 2.402451 / 3.745712 (-1.343261) | 2.835577 / 5.269862 (-2.434285) | 1.754853 / 4.565676 (-2.810824) | 0.063355 / 0.424275 (-0.360920) | 0.005010 / 0.007607 (-0.002598) | 0.332061 / 0.226044 (0.106016) | 3.287121 / 2.268929 (1.018193) | 1.829520 / 55.444624 (-53.615104) | 1.542669 / 6.876477 (-5.333808) | 1.560679 / 2.142072 (-0.581393) | 0.642371 / 4.805227 (-4.162856) | 0.118636 / 6.500664 (-6.382028) | 0.042262 / 0.075469 (-0.033207) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984803 / 1.841788 (-0.856985) | 11.578044 / 8.074308 (3.503735) | 9.383428 / 10.191392 (-0.807964) | 0.141367 / 0.680424 (-0.539057) | 0.014047 / 0.534201 (-0.520154) | 0.291505 / 0.579283 (-0.287778) | 0.270199 / 0.434364 (-0.164165) | 0.329874 / 0.540337 (-0.210463) | 0.429386 / 1.386936 (-0.957550) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006031) | 0.004023 / 0.011008 (-0.006986) | 0.050126 / 0.038508 (0.011618) | 0.029937 / 0.023109 (0.006828) | 0.275985 / 0.275898 (0.000087) | 0.297965 / 0.323480 (-0.025515) | 0.004429 / 0.007986 (-0.003557) | 0.002729 / 0.004328 (-0.001599) | 0.048995 / 0.004250 (0.044744) | 0.044940 / 0.037052 (0.007888) | 0.288397 / 0.258489 (0.029908) | 0.317716 / 0.293841 (0.023875) | 0.029705 / 0.128546 (-0.098841) | 0.010972 / 0.075646 (-0.064674) | 0.058592 / 0.419271 (-0.360680) | 0.054640 / 0.043533 (0.011108) | 0.276456 / 0.255139 (0.021317) | 0.295119 / 0.283200 (0.011919) | 0.020032 / 0.141683 (-0.121651) | 1.175740 / 1.452155 (-0.276415) | 1.227246 / 1.492716 (-0.265471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092204 / 0.018006 (0.074197) | 0.300344 / 0.000490 (0.299855) | 0.000213 / 0.000200 (0.000013) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021540 / 0.037411 (-0.015871) | 0.076252 / 0.014526 (0.061726) | 0.087582 / 0.176557 (-0.088975) | 0.125977 / 0.737135 (-0.611159) | 0.090649 / 0.296338 (-0.205689) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294544 / 0.215209 (0.079335) | 2.883736 / 2.077655 (0.806082) | 1.570932 / 1.504120 (0.066812) | 1.449082 / 1.541195 (-0.092113) | 1.463262 / 1.468490 (-0.005228) | 0.559625 / 4.584777 (-4.025152) | 2.448593 / 3.745712 (-1.297119) | 2.663857 / 5.269862 (-2.606005) | 1.757812 / 4.565676 (-2.807865) | 0.061999 / 0.424275 (-0.362276) | 0.005100 / 0.007607 (-0.002507) | 0.343620 / 0.226044 (0.117575) | 3.487059 / 2.268929 (1.218130) | 1.963078 / 55.444624 (-53.481546) | 1.661758 / 6.876477 (-5.214719) | 1.799130 / 2.142072 (-0.342942) | 0.650194 / 4.805227 (-4.155034) | 0.117375 / 6.500664 (-6.383289) | 0.040957 / 0.075469 (-0.034512) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.037882 / 1.841788 (-0.803906) | 12.239784 / 8.074308 (4.165476) | 10.478186 / 10.191392 (0.286794) | 0.164446 / 0.680424 (-0.515978) | 0.014901 / 0.534201 (-0.519300) | 0.302485 / 0.579283 (-0.276798) | 0.283994 / 0.434364 (-0.150370) | 0.338473 / 0.540337 (-0.201864) | 0.468901 / 1.386936 (-0.918035) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fa934e275d240d9b1228b2f598bc96390299339 \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6742/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6742 | https://github.com/huggingface/datasets/pull/6742 | true |
2,194,626,108 | https://api.github.com/repos/huggingface/datasets/issues/6741/labels{/name} | Reported in https://github.com/huggingface/datasets/issues/4760
The cache was not able to reload a dataset with a single config if the config name was not specified.
For example:
```python
from datasets import load_dataset, config
config.HF_DATASETS_OFFLINE = True
load_dataset("openai_humaneval")
```
This was due to a regression in https://github.com/huggingface/datasets/pull/6632 | 2024-03-25T16:35:21Z | 6,741 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2024-03-19T10:48:32Z | https://api.github.com/repos/huggingface/datasets/issues/6741/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6741/timeline | Fix offline mode with single config | https://api.github.com/repos/huggingface/datasets/issues/6741/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2024-03-25T16:23:59Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6741",
"merged_at": "2024-03-25T16:23:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6741"
} | PR_kwDODunzps5qEiu3 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6741). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006260) | 0.003317 / 0.011008 (-0.007692) | 0.064795 / 0.038508 (0.026287) | 0.030373 / 0.023109 (0.007263) | 0.258776 / 0.275898 (-0.017122) | 0.269768 / 0.323480 (-0.053711) | 0.004186 / 0.007986 (-0.003799) | 0.002630 / 0.004328 (-0.001699) | 0.048643 / 0.004250 (0.044392) | 0.044220 / 0.037052 (0.007168) | 0.265113 / 0.258489 (0.006624) | 0.292202 / 0.293841 (-0.001639) | 0.027468 / 0.128546 (-0.101079) | 0.010123 / 0.075646 (-0.065523) | 0.226869 / 0.419271 (-0.192402) | 0.035739 / 0.043533 (-0.007794) | 0.253193 / 0.255139 (-0.001946) | 0.271002 / 0.283200 (-0.012198) | 0.017201 / 0.141683 (-0.124482) | 1.105836 / 1.452155 (-0.346318) | 1.161559 / 1.492716 (-0.331158) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090481 / 0.018006 (0.072475) | 0.299013 / 0.000490 (0.298524) | 0.000220 / 0.000200 (0.000020) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017684 / 0.037411 (-0.019727) | 0.061580 / 0.014526 (0.047054) | 0.074370 / 0.176557 (-0.102186) | 0.119468 / 0.737135 (-0.617667) | 0.074671 / 0.296338 (-0.221668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284778 / 0.215209 (0.069569) | 2.780241 / 2.077655 (0.702586) | 1.504025 / 1.504120 (-0.000095) | 1.386644 / 1.541195 (-0.154550) | 1.402038 / 
1.468490 (-0.066452) | 0.555180 / 4.584777 (-4.029597) | 2.410973 / 3.745712 (-1.334740) | 2.773252 / 5.269862 (-2.496610) | 1.722784 / 4.565676 (-2.842892) | 0.062773 / 0.424275 (-0.361502) | 0.004959 / 0.007607 (-0.002648) | 0.337163 / 0.226044 (0.111119) | 3.356947 / 2.268929 (1.088019) | 1.880953 / 55.444624 (-53.563671) | 1.556049 / 6.876477 (-5.320427) | 1.578589 / 2.142072 (-0.563483) | 0.641993 / 4.805227 (-4.163234) | 0.118624 / 6.500664 (-6.382040) | 0.042202 / 0.075469 (-0.033268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995321 / 1.841788 (-0.846467) | 12.257597 / 8.074308 (4.183289) | 9.646214 / 10.191392 (-0.545178) | 0.131124 / 0.680424 (-0.549300) | 0.014119 / 0.534201 (-0.520082) | 0.287597 / 0.579283 (-0.291686) | 0.266983 / 0.434364 (-0.167381) | 0.328165 / 0.540337 (-0.212173) | 0.422405 / 1.386936 (-0.964531) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005091 / 0.011353 (-0.006262) | 0.003358 / 0.011008 (-0.007650) | 0.049136 / 0.038508 (0.010628) | 0.031075 / 0.023109 (0.007966) | 0.275047 / 0.275898 (-0.000851) | 0.296845 / 0.323480 (-0.026635) | 0.004949 / 0.007986 (-0.003037) | 0.002586 / 0.004328 (-0.001743) | 0.048164 / 0.004250 (0.043913) | 0.040754 / 0.037052 (0.003702) | 0.288715 / 0.258489 (0.030226) | 0.312383 / 0.293841 (0.018542) | 0.029372 / 0.128546 (-0.099174) | 0.010097 / 0.075646 (-0.065549) | 0.056752 / 0.419271 (-0.362520) | 0.033128 / 0.043533 (-0.010405) | 0.274986 / 0.255139 (0.019847) | 0.292692 / 0.283200 (0.009493) | 0.018309 / 0.141683 (-0.123374) | 1.190320 / 1.452155 (-0.261834) | 1.222529 / 1.492716 (-0.270188) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091717 / 0.018006 (0.073711) | 0.300278 / 0.000490 (0.299788) | 0.000217 / 0.000200 (0.000017) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021394 / 0.037411 (-0.016018) | 0.074918 / 0.014526 (0.060392) | 0.087461 / 0.176557 (-0.089095) | 0.125499 / 0.737135 (-0.611636) | 0.087484 / 0.296338 (-0.208854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296557 / 0.215209 (0.081348) | 2.905527 / 2.077655 (0.827872) | 1.624640 / 1.504120 (0.120520) | 1.505495 / 1.541195 (-0.035700) | 1.514066 / 1.468490 (0.045576) | 0.569376 / 4.584777 (-4.015401) | 2.448575 / 3.745712 (-1.297137) | 2.772805 / 5.269862 (-2.497057) | 1.757287 / 4.565676 (-2.808390) | 0.064209 / 0.424275 (-0.360066) | 0.005688 / 0.007607 (-0.001919) | 0.353175 / 0.226044 (0.127131) | 3.481591 / 2.268929 (1.212662) | 1.995384 / 55.444624 (-53.449240) | 1.684623 / 6.876477 (-5.191854) | 1.675750 / 2.142072 (-0.466323) | 0.644463 / 4.805227 (-4.160764) | 0.115393 / 6.500664 (-6.385271) | 0.040671 / 0.075469 (-0.034799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.037487 / 1.841788 (-0.804301) | 11.902194 / 8.074308 (3.827886) | 10.148579 / 10.191392 (-0.042813) | 0.150261 / 0.680424 (-0.530163) | 0.015001 / 0.534201 (-0.519200) | 0.291008 / 0.579283 (-0.288275) | 0.278758 / 0.434364 (-0.155606) | 0.334037 / 0.540337 (-0.206301) | 0.419942 / 1.386936 (-0.966994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dcd01046388fc052d37acc5a450bea69e3c57afc \"CML watermark\")\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6741/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6741 | https://github.com/huggingface/datasets/pull/6741 | true |