---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 4450242498.020249
num_examples: 287968
- name: test
num_bytes: 234247797.33875093
num_examples: 15157
download_size: 4756942293
dataset_size: 4684490295.359
license: mit
---
# Dataset Card for "lsun-bedrooms"
This is a 20% sample of the bedrooms category in [`LSUN`](https://github.com/fyu/lsun), uploaded as a dataset for convenience.
The license for _this compilation only_ is MIT. The data retains the same license as the original dataset.
This is (roughly) the code that was used to upload this dataset:
```python
import shutil

from miniai.imports import *
from miniai.diffusion import *
from datasets import load_dataset

path_data = Path('data')
path_data.mkdir(exist_ok=True)
path = path_data/'bedroom'

# Download and extract the images if they are not already present
url = 'https://s3.amazonaws.com/fast-ai-imageclas/bedroom.tgz'
if not path.exists():
    path_zip = fc.urlsave(url, path_data)
    shutil.unpack_archive('data/bedroom.tgz', 'data')

# Build an image dataset from the folder, drop the folder-derived label,
# carve out a 5% test split, and push the result to the Hub
dataset = load_dataset("imagefolder", data_dir="data/bedroom")
dataset = dataset.remove_columns('label')
dataset = dataset['train'].train_test_split(test_size=0.05)
dataset.push_to_hub("pcuenq/lsun-bedrooms")
```
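The `train_test_split(test_size=0.05)` step can be sketched in pure Python. The helper below is illustrative only, not the `datasets` implementation (which also splits the underlying column data); it shows the shuffle-then-carve semantics, with the test split sized as the ceiling of `test_size * n`, which reproduces the split sizes listed in the metadata above.

```python
import math
import random

def train_test_split(items, test_size=0.05, seed=42):
    # Sketch of the split semantics: shuffle a copy of the items,
    # then carve off ceil(test_size * n) of them for the test split.
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = math.ceil(len(shuffled) * test_size)
    return {"test": shuffled[:n_test], "train": shuffled[n_test:]}

# 303,125 total images, matching the train + test counts in the card.
splits = train_test_split(range(303125))
print(len(splits["train"]), len(splits["test"]))  # 287968 15157
```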