---
license: openrail
task_categories:
- image-segmentation
pretty_name: California Burned Areas
size_categories:
- n<1K
tags:
- climate
---
# California Burned Areas Dataset
**Working on adding more data**
## Dataset Description
- **Paper:** [CaBuAr: California Burned Areas Dataset for Delineation](https://doi.org/10.1109/MGRS.2023.3292467)
### Dataset Summary
This dataset contains images from Sentinel-2 satellites taken before and after a wildfire.
The ground truth masks are provided by the California Department of Forestry and Fire Protection and are mapped onto the images.
### Supported Tasks
The dataset is designed for binary semantic segmentation of burned vs. unburned areas.
## Dataset Structure
We opted for HDF5 because it offers better portability and smaller file sizes than GeoTIFF.
### Dataset opening
Using the `datasets` library, you download only the pre-patched raw version for simplicity.
```python
from datasets import load_dataset

# There are two available configurations: "post-fire" and "pre-post-fire".
dataset = load_dataset("DarthReca/california_burned_areas", name="post-fire")
```
The dataset was compressed with `h5py` using the BZip2 filter from `hdf5plugin`. **WARNING: `hdf5plugin` must be imported to read the data**.
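If you work with the HDF5 files directly, importing `hdf5plugin` is enough: the import registers the BZip2 filter with `h5py`. A minimal sketch (the file name is a placeholder for a downloaded file):
```python
import h5py
import hdf5plugin  # noqa: F401 -- importing registers the BZip2 filter h5py needs

# "example.hdf5" is a placeholder; use the path of a downloaded file.
with h5py.File("example.hdf5", "r") as f:
    f.visit(print)  # list every group and dataset in the file
```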
### Data Instances
Each matrix has a shape of 5490x5490xC, where C is 12 for pre-fire and post-fire images, while binary masks have no channel dimension (5490x5490).
A pre-patched version with 512x512xC matrices is also provided; in this case, only patches whose masks contain at least one positive pixel are included.
The dataset comes in two versions: _raw_ (without any transformation) and _normalized_ (with data normalized to the range 0-255).
We suggest using the _raw_ version, so you are free to apply any pre-processing steps you want.
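The exact transformation behind the _normalized_ version is not documented here, so treat the following per-channel min-max rescale as just one possible sketch of bringing _raw_ data into the 0-255 range:
```python
import numpy as np

def to_uint8(img: np.ndarray) -> np.ndarray:
    """Rescale a HxWxC raw image to 0-255, channel by channel (min-max)."""
    img = img.astype(np.float32)
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return ((img - lo) / np.maximum(hi - lo, 1e-6) * 255.0).astype(np.uint8)
```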
### Data Fields
In each standard HDF5 file, you can find pre-fire images, post-fire images, and binary masks. The file is structured in this way:
```bash
├── foldn
│   ├── uid0
│   │   ├── pre_fire
│   │   ├── post_fire
│   │   └── mask
│   └── uid1
│       ├── post_fire
│       └── mask
│
├── foldm
│   ├── uid2
│   │   ├── post_fire
│   │   └── mask
│   └── uid3
│       ├── pre_fire
│       ├── post_fire
│       └── mask
...
```
where `foldn` and `foldm` are fold names and each `uid` is a unique identifier for a wildfire.
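A minimal sketch of walking this layout with `h5py` (the file name is a placeholder; note that `pre_fire` is optional, as the tree above shows):
```python
import h5py
import hdf5plugin  # noqa: F401 -- registers the BZip2 decompression filter

with h5py.File("example.hdf5", "r") as f:  # placeholder file name
    for fold_name, fold in f.items():
        for uid, wildfire in fold.items():
            post = wildfire["post_fire"][...]  # 5490x5490x12
            mask = wildfire["mask"][...]       # 5490x5490 binary mask
            # pre_fire is present only for some wildfires
            pre = wildfire["pre_fire"][...] if "pre_fire" in wildfire else None
```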
For the pre-patched version, the structure is:
```bash
root
|
|-- uid0_x: {post_fire, pre_fire, mask}
|
|-- uid0_y: {post_fire, pre_fire, mask}
|
|-- uid1_x: {post_fire, mask}
|
...
```
The fold name is stored as an attribute.
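Access to the pre-patched file follows the same pattern, except the patch groups sit at the root. Since the exact attribute name is not stated above, inspect `attrs` on your copy; a sketch:
```python
import h5py
import hdf5plugin  # noqa: F401

with h5py.File("patched.hdf5", "r") as f:  # placeholder file name
    for key, patch in f.items():
        # The fold name lives in the group's attributes; print them once
        # to learn the attribute name used in your file.
        print(key, list(patch.keys()), dict(patch.attrs))
```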
### Data Splits
There are 5 random splits whose names are: 0, 1, 2, 3, and 4.
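For cross-validation, you can hold out one split and train on the other four. A sketch using the pre-patched file, assuming the fold attribute is named `fold` (verify with the snippet above):
```python
import h5py
import hdf5plugin  # noqa: F401

TEST_FOLD = "0"  # hold out one of the five splits: 0, 1, 2, 3, or 4

with h5py.File("patched.hdf5", "r") as f:  # placeholder file name
    train_keys, test_keys = [], []
    for key, patch in f.items():
        # "fold" is an assumed attribute name; check patch.attrs first.
        if str(patch.attrs.get("fold")) == TEST_FOLD:
            test_keys.append(key)
        else:
            train_keys.append(key)
```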
### Source Data
Data were collected directly from the Copernicus Open Access Hub through its API. The band files are aggregated into a single matrix.
## Additional Information
### Licensing Information
This work is released under the OpenRAIL license.
### Citation Information
If you plan to use this dataset in your work, please give credit to the Sentinel-2 mission and the California Department of Forestry and Fire Protection, and cite it using this BibTeX entry:
```bibtex
@ARTICLE{cabuar,
author={Cambrin, Daniele Rege and Colomba, Luca and Garza, Paolo},
journal={IEEE Geoscience and Remote Sensing Magazine},
title={CaBuAr: California burned areas dataset for delineation [Software and Data Sets]},
year={2023},
volume={11},
number={3},
pages={106-113},
doi={10.1109/MGRS.2023.3292467}
}
``` |