---
title: README
emoji: 🌍
colorFrom: blue
colorTo: green
sdk: static
pinned: false
license: mit
short_description: Repository of Pretrained Model Weights on BigEarthNet v2.0
---

# BigEarthNet v2.0 Pretrained Model Weights

We provide weights for several different pretrained models. For each model, we uploaded the weights of the checkpoint that achieved the best macro average precision score on the recommended test split. All models have been trained using: i) BigEarthNet-S1 data only (S1), ii) BigEarthNet-S2 data only (S2), or iii) both BigEarthNet-S1 and -S2 together (S1+S2).

The following bands were used to train the models (see the sketch after this list):

- For models using BigEarthNet-S1 only: Sentinel-1 bands ["VH", "VV"]
- For models using BigEarthNet-S2 only: Sentinel-2 10 m and 20 m bands ["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A"]
- For models using BigEarthNet-S1 and -S2: Sentinel-2 10 m and 20 m bands plus Sentinel-1 bands ["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A", "VH", "VV"]
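
For reference, these band sets can be written down as Python constants in the order listed above, e.g. when assembling input tensors. This is a minimal sketch; the variable names are illustrative and not defined by the repository:

```python
# Band names per input configuration, in the order listed above.
# Illustrative constants only; the repository itself does not define these names.
S1_BANDS = ["VH", "VV"]
S2_BANDS = ["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A"]
S1_S2_BANDS = S2_BANDS + S1_BANDS  # Sentinel-2 bands followed by Sentinel-1 bands
```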

The model produces a multi-hot encoded output that indicates the predicted multi-label classes. The positions in this output correspond to the following class labels, sorted in alphabetical order (a decoding sketch follows the list):
['Agro-forestry areas', 'Arable land', 'Beaches, dunes, sands', 'Broad-leaved forest', 'Coastal wetlands', 'Complex cultivation patterns', 'Coniferous forest', 'Industrial or commercial units', 'Inland waters', 'Inland wetlands', 'Land principally occupied by agriculture, with significant areas of natural vegetation', 'Marine waters', 'Mixed forest', 'Moors, heathland and sclerophyllous vegetation', 'Natural grassland and sparsely vegetated areas', 'Pastures', 'Permanent crops', 'Transitional woodland, shrub', 'Urban fabric']
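
As a minimal sketch of how a multi-hot output can be mapped back to class names (the sigmoid call and the 0.5 threshold are assumptions for illustration, not part of the repository):

```python
import torch

CLASS_NAMES = [
    'Agro-forestry areas', 'Arable land', 'Beaches, dunes, sands', 'Broad-leaved forest',
    'Coastal wetlands', 'Complex cultivation patterns', 'Coniferous forest',
    'Industrial or commercial units', 'Inland waters', 'Inland wetlands',
    'Land principally occupied by agriculture, with significant areas of natural vegetation',
    'Marine waters', 'Mixed forest', 'Moors, heathland and sclerophyllous vegetation',
    'Natural grassland and sparsely vegetated areas', 'Pastures', 'Permanent crops',
    'Transitional woodland, shrub', 'Urban fabric',
]

def decode_multi_hot(logits: torch.Tensor, threshold: float = 0.5) -> list[str]:
    """Map one 19-dimensional output vector to the predicted class names."""
    probs = torch.sigmoid(logits)  # assumes the model returns raw logits
    return [name for name, p in zip(CLASS_NAMES, probs.tolist()) if p > threshold]
```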

[BigEarthNet](http://bigearth.net/)

## Links

| Model | Equivalent timm model name | S1 only | S2 only | S1+S2 |
| --- | --- | --- | --- | --- |
| ConvMixer-768/32 | `convmixer_768_32` | ConvMixer-768/32 S1 | ConvMixer-768/32 S2 | ConvMixer-768/32 S1+S2 |
| ConvNext v2 Base | `convnextv2_base` | ConvNext v2 Base S1 | ConvNext v2 Base S2 | ConvNext v2 Base S1+S2 |
| MLP-Mixer Base | `mixer_b16_224` | MLP-Mixer Base S1 | MLP-Mixer Base S2 | MLP-Mixer Base S1+S2 |
| MobileViT-S | `mobilevit_s` | MobileViT-S S1 | MobileViT-S S2 | MobileViT-S S1+S2 |
| ResNet-50 | `resnet50` | ResNet-50 S1 | ResNet-50 S2 | ResNet-50 S1+S2 |
| ResNet-101 | `resnet101` | ResNet-101 S1 | ResNet-101 S2 | ResNet-101 S1+S2 |
| ViT Base | `vit_base_patch8_224` | ViT Base S1 | ViT Base S2 | ViT Base S1+S2 |
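
Each cell in the S1 / S2 / S1+S2 columns corresponds to a separate Hugging Face model repository. Judging only from the ResNet-50 S2 example in the Usage section below, the repository names seem to combine the timm model name, the input configuration, and a version tag; this is an inference from a single example, not a documented naming scheme, so verify the exact name on the hub:

```python
# Assumed naming pattern, inferred from "BIFOLD-BigEarthNetv2-0/resnet50-s2-v0.1.1".
# Verify the actual repository name on the Hugging Face hub before using it.
def repo_id(timm_name: str, inputs: str = "s2", version: str = "v0.1.1") -> str:
    return f"BIFOLD-BigEarthNetv2-0/{timm_name}-{inputs}-{version}"

print(repo_id("resnet50"))  # -> BIFOLD-BigEarthNetv2-0/resnet50-s2-v0.1.1
```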


## Usage

To use a model, download the code that defines the model architecture from the official BigEarthNet v2.0 (reBEN) repository and load the model with the corresponding weights using the code below. Note that `configilm` is required to run this code.

```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "path_to/huggingface_model_folder"
)
```

For example:

```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/resnet50-s2-v0.1.1"
)
```
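
Once loaded, the model behaves like a regular PyTorch module. The sketch below runs a dummy forward pass for the ResNet-50 S2 example above; the 10-channel input (one channel per Sentinel-2 band listed earlier), the 120x120 patch size, and the sigmoid/threshold post-processing are assumptions for illustration:

```python
import torch

model.eval()
# Dummy batch: 1 sample, 10 Sentinel-2 bands, 120x120 pixels (assumed patch size).
x = torch.rand(1, 10, 120, 120)
with torch.no_grad():
    logits = model(x)              # expected shape: (1, 19)
probs = torch.sigmoid(logits)      # assumes the model returns raw logits
prediction = (probs > 0.5).int()   # multi-hot prediction vector
```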

If you use any of these models in your research, please cite the following papers:

```bibtex
@article{clasen2024refinedbigearthnet,
  title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
  author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
  year={2024},
  eprint={2407.03653},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.03653},
}

@article{hackel2024configilm,
  title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
  author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
  journal={SoftwareX},
  volume={26},
  pages={101731},
  year={2024},
  publisher={Elsevier}
}
```