---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: attributes
    sequence: int8
    length: 40
  - name: identity
    dtype: int64
  - name: bbox
    sequence: int32
    length: 4
  - name: landmarks
    sequence: int32
    length: 10
  splits:
  - name: train
    num_bytes: 8645556172.75
    num_examples: 162770
  - name: validation
    num_bytes: 142232383.301
    num_examples: 19867
  - name: test
    num_bytes: 141332777.292
    num_examples: 19962
  download_size: 8917038019
  dataset_size: 8929121333.343
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
A port of the famous [CelebA dataset](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to 🤗 Datasets.
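The ported dataset can be loaded directly with the `datasets` library (a minimal sketch; the repository id is the one used in the porting script below):
```python
from datasets import load_dataset

# Stream the train split so the full ~8.9 GB download is not needed up front.
ds = load_dataset("eurecom-ds/celeba", split="train", streaming=True)

example = next(iter(ds))
print(example["image"].size)        # decoded PIL.Image.Image
print(len(example["attributes"]))   # 40 binary attribute labels
```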
### Dataset Component Descriptions
#### Attributes (`attributes`)
- **Description**: The `attributes` feature consists of 40 binary labels, one per facial attribute, each encoded as 0 (absent) or 1 (present). The attributes cover a wide range of facial characteristics and styles, such as "Smiling", "Wearing Hat", and "Eyeglasses" (see the sketch after this list).
- **Data Type**: Sequence
- **Length**: `40`
- **Dtype**: `int8`
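A minimal sketch of reading the attribute vector; the index-to-name mapping is assumed to follow the attribute ordering of the original CelebA `list_attr_celeba.txt` file:
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("eurecom-ds/celeba", split="train", streaming=True)
sample = next(iter(ds))

# 40 values of 0/1; index -> attribute-name mapping assumed to follow the
# ordering of the original list_attr_celeba.txt annotation file.
attrs = np.asarray(sample["attributes"], dtype=np.int8)
print(attrs.shape)       # (40,)
print(int(attrs.sum()))  # number of attributes present for this image
```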
#### Identity (`identity`)
- **Description**: The `identity` feature is an integer label identifying the person shown in each image, so that images of the same person can be grouped together. This supports tasks such as face recognition and verification, where the goal is to match different images of the same person (see the sketch after this list).
- **Data Type**: `int64`
- **Unique Identifiers**: Each integer value corresponds to a unique individual.
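For example, identities can be grouped to find candidates for positive verification pairs (a minimal sketch over a small streamed prefix of the train split):
```python
from collections import defaultdict

from datasets import load_dataset

# Drop the image column so streaming does not decode every image.
ds = load_dataset("eurecom-ds/celeba", split="train", streaming=True)
ds = ds.remove_columns("image")

by_identity = defaultdict(list)
for i, example in enumerate(ds):
    by_identity[example["identity"]].append(i)
    if i >= 2000:  # small prefix, just for illustration
        break

# Identities with more than one image can supply positive verification pairs.
multi = {ident: idxs for ident, idxs in by_identity.items() if len(idxs) > 1}
print(len(multi))
```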
#### Bounding Box (`bbox`)
- **Description**: The `bbox` feature provides the coordinates of a rectangle that encloses the face in each image, which is useful when the face needs to be isolated or focused on. The box is defined by four integers: the x and y coordinates of the top-left corner, followed by the width and height of the box (see the sketch after this list).
- **Data Type**: Sequence
- **Length**: `4`
- **Dtype**: `int32`
- **Details**: The format is `[x, y, width, height]`, where `(x, y)` are the coordinates of the top-left corner of the bounding box.
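A minimal sketch of cropping with the bounding box. Note that in the original CelebA release the `bbox` annotations refer to the in-the-wild images, so the crop may not line up if the stored images are the aligned variant; treat this purely as a format illustration:
```python
from datasets import load_dataset

ds = load_dataset("eurecom-ds/celeba", split="train", streaming=True)
example = next(iter(ds))

# [x, y, width, height] -> PIL's (left, upper, right, lower) crop box.
x, y, w, h = example["bbox"]
face = example["image"].crop((x, y, x + w, y + h))
face.save("face_crop.png")
```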
#### Landmarks (`landmarks`)
- **Description**: The `landmarks` feature gives the coordinates of five key facial points (the eyes, the nose, and the mouth corners), which are useful for detailed facial analysis and tasks such as face manipulation or animation (see the sketch after this list).
- **Data Type**: Sequence
- **Length**: `10`
- **Dtype**: `int32`
- **Details**: The format is `[lefteye_x, lefteye_y, righteye_x, righteye_y, nose_x, nose_y, leftmouth_x, leftmouth_y, rightmouth_x, rightmouth_y]`, representing the x and y coordinates of each landmark point.
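A minimal sketch that pairs the flat coordinate sequence into five `(x, y)` points and draws them on the image for a quick visual check:
```python
from PIL import ImageDraw

from datasets import load_dataset

ds = load_dataset("eurecom-ds/celeba", split="train", streaming=True)
example = next(iter(ds))

img = example["image"].copy()
draw = ImageDraw.Draw(img)

# Flat [x0, y0, x1, y1, ...] sequence -> five (x, y) landmark points.
coords = example["landmarks"]
for x, y in zip(coords[0::2], coords[1::2]):
    draw.ellipse((x - 2, y - 2, x + 2, y + 2), fill="red")

img.save("landmarks_preview.png")
```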
Script used for porting:
```python
import torchvision
from datasets import Dataset, Features, Image as HFImage, Sequence, Value

# Load the original CelebA train split with all four annotation types.
celeba_dataset = torchvision.datasets.CelebA(
    root="./celeb_a", split="train",
    target_type=["attr", "identity", "bbox", "landmarks"], download=False,
)

def gen():
    # torchvision yields (PIL image, (attr, identity, bbox, landmarks)) tuples.
    for img, (attr, identity, bbox, landmarks) in celeba_dataset:
        yield {
            "image": img,
            "attributes": attr.numpy(),
            "identity": identity.item(),
            "bbox": bbox.numpy(),
            "landmarks": landmarks.numpy(),
        }

features = Features({
    "image": HFImage(decode=True, id=None),
    "attributes": Sequence(feature=Value("int8"), length=40),
    "identity": Value("int64"),
    "bbox": Sequence(feature=Value("int32"), length=4),
    "landmarks": Sequence(feature=Value("int32"), length=10),
})

# Create a Dataset object from the generator
hf_dataset = Dataset.from_generator(generator=gen, features=features)

# Push the dataset to the Hugging Face Hub
hf_dataset.push_to_hub("eurecom-ds/celeba", split="train")
```
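The script above handles only the train split; the validation and test splits listed in the metadata can be pushed the same way (a sketch reusing the imports and `features` from the script above; note that torchvision names the validation split "valid"):
```python
# Sketch only: reuses `features` and the imports defined in the script above.
for tv_split, hub_split in [("valid", "validation"), ("test", "test")]:
    split_ds = torchvision.datasets.CelebA(
        root="./celeb_a", split=tv_split,
        target_type=["attr", "identity", "bbox", "landmarks"], download=False,
    )

    def gen_split(ds=split_ds):
        for img, (attr, identity, bbox, landmarks) in ds:
            yield {
                "image": img,
                "attributes": attr.numpy(),
                "identity": identity.item(),
                "bbox": bbox.numpy(),
                "landmarks": landmarks.numpy(),
            }

    split_hf = Dataset.from_generator(generator=gen_split, features=features)
    split_hf.push_to_hub("eurecom-ds/celeba", split=hub_split)
```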