Update README.md
README.md

## Dataset Description

- **Paper:**

### Dataset Summary

We opted to use HDF5 to grant better portability and a lower file size than GeoTIFF.
### Dataset opening

The dataset was compressed using `h5py` and the BZip2 filter from `hdf5plugin`. **WARNING: `hdf5plugin` is required to read the data**.
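
A minimal sketch of how a file can be opened (the path below is a placeholder for whichever HDF5 file you downloaded); importing `hdf5plugin` is what lets `h5py` decode the BZip2-compressed datasets:

```python
import h5py
import hdf5plugin  # importing it registers the BZip2 filter with h5py; reads fail without it

# "california_burned_areas.h5" is a placeholder: use the path of the file you downloaded
with h5py.File("california_burned_areas.h5", "r") as f:
    print(list(f.keys()))  # top-level groups, i.e. the folds
```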

### Data Instances

Each matrix has a shape of 5490x5490xC, where C is 12 for pre-fire and post-fire images and 0 for binary masks (the masks have no channel dimension).
A pre-patched version with matrices of size 512x512xC is also provided. In this case, only patches whose mask contains at least one positive pixel are included.

You can find two versions of the dataset: _raw_ (without any transformation) and _normalized_ (with data normalized to the 0-255 range).
We suggest using the _raw_ version so that you can apply any pre-processing steps you want.
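
For instance, starting from the _raw_ version, a simple channel-wise min-max scaling to 0-255 might look like the sketch below (an illustration only, not necessarily the exact transformation used to produce the _normalized_ release):

```python
import numpy as np

def min_max_to_uint8(image: np.ndarray) -> np.ndarray:
    """Scale a raw (H, W, 12) image to the 0-255 range, channel by channel."""
    image = image.astype(np.float32)
    lo = image.min(axis=(0, 1), keepdims=True)
    hi = image.max(axis=(0, 1), keepdims=True)
    scaled = (image - lo) / np.maximum(hi - lo, 1e-6) * 255.0
    return scaled.astype(np.uint8)
```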

### Data Fields

In each standard HDF5 file, you can find the post-fire and pre-fire images and the binary masks. The file is structured in this way:

```bash
├── foldn
...
```

where `foldn` and `foldm` are fold names and `uidn` is a unique identifier for the wildfire.
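
Since the inner dataset names are elided in the tree above, a sketch of how the whole hierarchy can be walked without hard-coding any of them:

```python
import h5py
import hdf5plugin  # required to decode the BZip2-compressed datasets

# placeholder path; walks the whole tree without assuming group or dataset names
with h5py.File("california_burned_areas.h5", "r") as f:
    def describe(name, obj):
        # print every pre-fire / post-fire / mask dataset with its shape and dtype
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(describe)
```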

For the pre-patched version, the structure is:

```bash
root
...
```

The fold name is stored as an attribute.
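
Because the exact attribute key is not spelled out here, a defensive sketch is to dump all attributes (the path is again a placeholder):

```python
import h5py
import hdf5plugin  # required to decode the BZip2-compressed datasets

# "patched_512x512.h5" is a placeholder name for the pre-patched file
with h5py.File("patched_512x512.h5", "r") as f:
    for name, item in f.items():
        # the attribute holding the fold name is not documented above,
        # so print every attribute attached to each top-level item
        print(name, dict(item.attrs))
```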

### Data Splits

There are 5 random splits, named 0, 1, 2, 3, and 4.
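
For example (a sketch, assuming the split names appear as the top-level group names shown above), one way to hold out a fold for testing:

```python
import h5py
import hdf5plugin  # required to decode the BZip2-compressed datasets

test_fold = "0"  # hold out one of the five folds: "0" ... "4"
with h5py.File("california_burned_areas.h5", "r") as f:  # placeholder path
    train_folds = [name for name in f.keys() if name != test_fold]
    print("train folds:", train_folds, "test fold:", test_fold)
```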

### Source Data

Data are collected directly from the Copernicus Open Access Hub through its API. The band files are aggregated into a single matrix.
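
As an illustration of the aggregation step (the band arrays below are hypothetical; the actual download pipeline is not part of this card), stacking 12 Sentinel-2 bands into one 5490x5490x12 matrix could look like:

```python
import numpy as np

# hypothetical list of 12 per-band arrays, each of shape (5490, 5490),
# e.g. read from the individual Sentinel-2 band files
bands = [np.zeros((5490, 5490), dtype=np.uint16) for _ in range(12)]

# aggregate them into a single (5490, 5490, 12) matrix, matching the shapes above
image = np.stack(bands, axis=-1)
print(image.shape)  # (5490, 5490, 12)
```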

## Additional Information

### Licensing Information

This work is released under the OpenRAIL license.

### Citation Information

If you plan to use this dataset in your work, please give credit to the Sentinel-2 mission and the California Department of Forestry and Fire Protection, and cite it using this BibTeX:

```bibtex
@article{cabuar,
  title={CaBuAr: California Burned Areas dataset for delineation},
  author={Rege Cambrin, Daniele and Colomba, Luca and Garza, Paolo},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  doi={10.1109/MGRS.2023.3292467},
  year={2023}
}
```