Merge branch 'main' of https://huggingface.co/datasets/DarthReca/california_burned_areas
README.md
CHANGED

## Dataset Description

- **Paper:**

### Dataset Summary

The dataset is designed for binary semantic segmentation of burned vs. unburned areas.

## Dataset Structure

We opted for HDF5 because it grants better portability and smaller file sizes than GeoTIFF.

### Dataset opening

The dataset was compressed using `h5py` and the BZip2 filter from `hdf5plugin`. **WARNING: `hdf5plugin` is required to read the data.**
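
A minimal sketch of opening a file (the file name below is only a placeholder for whichever archive you downloaded):

```python
import hdf5plugin  # registers the BZip2 filter; without it h5py cannot decode the data
import h5py

# "dataset.hdf5" is a placeholder for the file you downloaded from this repository.
with h5py.File("dataset.hdf5", "r") as f:
    f.visit(print)  # print the name of every group/dataset stored in the file
```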

### Data Instances

Each matrix has a shape of 5490x5490xC, where C is 12 for pre-fire and post-fire images, while binary masks have no channel dimension (5490x5490).

A pre-patched version with matrices of size 512x512xC is also provided; in this case, only patches whose mask contains at least one positive pixel are included.

You can find two versions of the dataset: _raw_ (without any transformation) and _normalized_ (with data normalized in the range 0-255).

We suggest using the _raw_ version so that you can apply whatever pre-processing you prefer.
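
For example, starting from the _raw_ version, a simple per-channel min-max scaling to the 0-255 range (just one possible pre-processing choice, not necessarily the scheme used for the _normalized_ release) could look like this:

```python
import numpy as np

def min_max_scale(img: np.ndarray) -> np.ndarray:
    """Scale a HxWxC raw matrix to the 0-255 range, channel by channel."""
    img = img.astype(np.float32)
    mins = img.min(axis=(0, 1), keepdims=True)
    maxs = img.max(axis=(0, 1), keepdims=True)
    return (img - mins) / np.maximum(maxs - mins, 1e-6) * 255.0
```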

### Data Fields

In each standard HDF5 file, you can find post-fire, pre-fire images, and binary masks. The file is structured in this way:

```bash
├── foldn
...
```

where `foldn` and `foldm` are fold names and `uidn` is a unique identifier for the wildfire.
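
A sketch of traversing the fold/uid hierarchy with `h5py`; the dataset names `post_fire`, `pre_fire`, and `mask` are assumed from the listing for the pre-patched version below, so verify the exact keys on your copy of the file:

```python
import hdf5plugin  # needed to decode the BZip2-compressed datasets
import h5py

# "dataset.hdf5" is a placeholder file name.
with h5py.File("dataset.hdf5", "r") as f:
    for fold_name, fold in f.items():        # one group per fold
        for uid, sample in fold.items():      # one group per wildfire
            post_fire = sample["post_fire"][...]   # 5490x5490x12 matrix
            mask = sample["mask"][...]              # binary burned-area mask
            # pre-fire imagery is missing for some wildfires
            pre_fire = sample["pre_fire"][...] if "pre_fire" in sample else None
            print(fold_name, uid, post_fire.shape, mask.shape)
```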

For the pre-patched version, the structure is:

```bash
root
|
|-- uid0_x: {post_fire, pre_fire, mask}
|
|-- uid0_y: {post_fire, pre_fire, mask}
|
|-- uid1_x: {post_fire, mask}
|
...
```

The fold name is stored as an attribute.
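
A corresponding sketch for the pre-patched layout; the attribute key used for the fold name (`"fold"`) is an assumption here, so inspect `group.attrs` to find the actual key:

```python
import hdf5plugin  # needed to decode the BZip2-compressed datasets
import h5py

# "patched_dataset.hdf5" is a placeholder file name.
with h5py.File("patched_dataset.hdf5", "r") as f:
    for name, patch in f.items():            # e.g. "uid0_x", "uid0_y", ...
        fold = patch.attrs.get("fold")        # assumed attribute key
        post_fire = patch["post_fire"][...]   # 512x512xC patch
        mask = patch["mask"][...]             # mask with at least one positive pixel
        print(name, fold, post_fire.shape)
```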

### Data Splits

There are 5 random splits, named 0, 1, 2, 3, and 4.
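
These fold names lend themselves to cross-validation; a minimal sketch of one hold-out split (the assignment below is our own choice, not something prescribed by the dataset):

```python
# Hold one fold out for testing and train on the remaining four.
folds = ["0", "1", "2", "3", "4"]
test_fold = "0"                                    # arbitrary choice
train_folds = [fold for fold in folds if fold != test_fold]
```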

### Source Data

Data are collected directly from the Copernicus Open Access Hub through its API. The band files are aggregated into a single matrix.
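
The aggregation step amounts to stacking the individual band rasters along a channel axis; a rough sketch with placeholder arrays (the real pipeline reads the bands from the downloaded Sentinel-2 products and may resample them to a common resolution first):

```python
import numpy as np

# Placeholder band rasters; in practice each one is read from a Sentinel-2 band file.
bands = [np.zeros((5490, 5490), dtype=np.uint16) for _ in range(12)]

# Stack the 12 bands along the last axis into a single 5490x5490x12 matrix.
matrix = np.stack(bands, axis=-1)
```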

## Additional Information

### Licensing Information

This work is released under the OpenRAIL license.

### Citation Information

If you plan to use this dataset in your work, please give credit to the Sentinel-2 mission and the California Department of Forestry and Fire Protection, and cite it using this BibTeX entry:

```
@article{cabuar,
  title={Ca{B}u{A}r: California {B}urned {A}reas dataset for delineation},
  author={Rege Cambrin, Daniele and Colomba, Luca and Garza, Paolo},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  doi={10.1109/MGRS.2023.3292467},
  year={2023}
}
```