Upload README.md with huggingface_hub
README.md CHANGED
```diff
@@ -1,30 +1,19 @@
-
-
-
-
-
-language:
-- en
+
+```yaml
+language:
+- english
+pretty_name: "Sound-of-Water 50"
 tags:
-- audio-visual
-- physical
+- audio-visual learning
+- physical property estimation
+- pouring water
+license: "MIT"
+task_categories:
+- physical property estimation
+- fine-grained audio classification
+- liquid mass estimation
+- container shape estimation
+```
-
-
-
-
-
-
-
-
-- split: train
-  path: "splits/train.csv"
-- split: test_I
-  path: "splits/test_I.csv"
-- split: test_II
-  path: "splits/test_II.csv"
-- split: test_III
-  path: "splits/test_III.csv"
----
 
 
 <!-- # <img src="./assets/pouring-water-logo5.png" alt="Logo" width="40"> -->
@@ -82,6 +71,23 @@ We collect a dataset of 805 clean videos that show the action of pouring water i
 <img width="650" alt="image" src="./assets/containers-v2.png">
 </p>
 
+Download the dataset with:
+
+```python
+# Note: this may take 5-10 minutes.
+import os
+# Optionally, disable progress bars
+# os.environ["HF_HUB_DISABLE_PROGRESS_BARS"] = "1"
+
+from huggingface_hub import snapshot_download
+snapshot_download(
+    repo_id="bpiyush/sound-of-water",
+    repo_type="dataset",
+    local_dir="/path/to/dataset/SoundOfWater",
+)
+```
+
+
 The dataset is stored in the following directory structure:
 ```sh
 SoundOfWater/
@@ -181,8 +187,6 @@ The splits are as follows:
 </table>
 
 
-TODO: add test_III.txt file.
-
 ## 📝 Annotations
 
 An example row with metadata for a video looks like:
@@ -272,4 +276,4 @@ We also want to highlight closely related work that could be of interest:
 
 ## 🙅🏻 Potential Biases
 
-The dataset is recorded on a standard mobile phone from the authors themselves. It is recorded in a indoor setting. As far as possible, we have tried to not include any personal information in the videos. Thus, it is unlikely to include harmdul biases. Plus, the scale of the dataset is small and is not likely to be used for training large models.
+The dataset is recorded on a standard mobile phone by the authors themselves, in an indoor setting. As far as possible, we have tried not to include any personal information in the videos, so it is unlikely to contain harmful biases. Moreover, the dataset is small in scale and is not likely to be used for training large models.
```