Update README.md
README.md CHANGED
@@ -56,4 +56,54 @@ configs:
    path: data/train-*
  - split: val
    path: data/val-*
task_categories:
- depth-estimation
- image-segmentation
- image-feature-extraction
size_categories:
- 1K<n<10K
---

This is the NYUv2 dataset for scene understanding tasks.
I downloaded the original data from the [Tsinghua Cloud](https://cloud.tsinghua.edu.cn/f/6d0a89f4ca1347d8af5f/?dl=1) and converted it into a Hugging Face dataset.
Credit to [ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning](http://arxiv.org/abs/2301.12618).

## Dataset Information

The dataset contains two splits: 'train' and 'val' (the 'val' split is used as the test set).
Each sample has five items: 'image', 'segmentation', 'depth', 'normal', and 'noise'.
The 'noise' item is generated with `torch.rand()`.
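The exact generation code is not included in this card; the following is only a minimal sketch of what a `torch.rand()` noise map matching the stored shape `(1, 288, 384)` could look like:

```python
import torch

# Illustrative only: the card states the 'noise' item comes from torch.rand(),
# which draws uniform values in [0, 1); the shape matches the 'noise' feature.
noise = torch.rand(1, 288, 384)
print(noise.shape, noise.min().item(), noise.max().item())
```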

## Usage

```python
from datasets import load_dataset

dataset = load_dataset('tanganke/nyuv2')
dataset = dataset.with_format('torch')  # convert the items into `torch.Tensor` objects
```

This will return a `DatasetDict`:

```python
DatasetDict({
    train: Dataset({
        features: ['image', 'segmentation', 'depth', 'normal', 'noise'],
        num_rows: 795
    })
    val: Dataset({
        features: ['image', 'segmentation', 'depth', 'normal', 'noise'],
        num_rows: 654
    })
})
```
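As a quick sanity check (not part of the original card), you can index one example and print the tensor shapes, which should match the feature specification below:

```python
# Assumes `dataset` was loaded and converted with `with_format('torch')` as above.
sample = dataset['train'][0]
for name, tensor in sample.items():
    print(name, tuple(tensor.shape), tensor.dtype)
# Per the feature spec below: image (3, 288, 384) float32,
# segmentation (288, 384) int64, depth (1, 288, 384) float32, ...
```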

The features:

```python
{'image': Array3D(shape=(3, 288, 384), dtype='float32', id=None),
 'segmentation': Array2D(shape=(288, 384), dtype='int64', id=None),
 'depth': Array3D(shape=(1, 288, 384), dtype='float32', id=None),
 'normal': Array3D(shape=(3, 288, 384), dtype='float32', id=None),
 'noise': Array3D(shape=(1, 288, 384), dtype='float32', id=None)}
```
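Because `with_format('torch')` makes every example a dict of fixed-shape tensors, the splits can be passed directly to a PyTorch `DataLoader`. The snippet below is a minimal sketch under that assumption; the batch size and loop body are placeholders, not from the original card:

```python
from torch.utils.data import DataLoader

# Default collation stacks each field across the batch because all samples
# share the same shapes, yielding e.g. batch['image'] of shape (8, 3, 288, 384).
train_loader = DataLoader(dataset['train'], batch_size=8, shuffle=True)

for batch in train_loader:
    images = batch['image']             # e.g. (8, 3, 288, 384) float32
    seg_labels = batch['segmentation']  # e.g. (8, 288, 384) int64
    depth = batch['depth']              # e.g. (8, 1, 288, 384) float32
    # ... a multi-task forward pass would go here
    break
```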