YOLOv8 models for deadwood segmentation from RGB UAV imagery
Model Details
Model Description
- Model type: Instance segmentation
- License: AGPL-3.0
- Finetuned from model: Ultralytics pretrained yolov8-seg models
Model Sources
- Repository: https://github.com/mayrajeo/yolov8-deadwood
- Paper: Added after submission
- Demo: https://huggingface.co/spaces/mayrajeo/yolov8-deadwood
Uses
Direct Use
Models are meant for detecting and segmenting fallen and standing deadwood from RGB UAV images. As the models were trained on 640x640 pixel orthoimage chips with around 5 cm spatial resolution, they are likely to work best on similar imagery.
Models can be used directly with the `ultralytics` library, e.g. `model = YOLO(<model_weights.pt>)`, and https://github.com/mayrajeo/yolov8-deadwood contains example scripts for applying the models to larger orthomosaics.
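The checkpoints can also be fetched programmatically; a minimal sketch using `huggingface_hub`, where the repository id and weight filename are placeholders that must be replaced with the actual location of the weights:

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Placeholder repo_id and filename: replace with the actual repository and
# weight file, e.g. one of the yolov8{n,s,m,l,x}_{hp,spk,both} checkpoints.
weights = hf_hub_download(repo_id="<user>/<model_repo>", filename="yolov8m_both.pt")
model = YOLO(weights)
```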
Out-of-Scope Use
There are some things to keep in mind when using these models:
- Models are trained using imagery from two geographically different locations, but both of the study sites consist of dense boreal forests in Finland.
- The imagery was collected during leaf-on season, so the models will likely not produce optimal results in other seasons.
How to Get Started with the Model
Single 640x640 pixel image chips can be processed with:

```python
from ultralytics import YOLO

model = YOLO(<path_to_model>)
res = model(<path_to_image>)
```
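Each element of `res` is an `ultralytics` `Results` object; a short sketch of reading out the predictions (attribute names follow the current `ultralytics` API):

```python
# Inspect the result for the single input image.
r = res[0]
print(r.names)         # class index -> class name mapping
print(r.boxes.cls)     # predicted class indices
print(r.boxes.conf)    # confidence scores
print(r.boxes.xyxy)    # bounding boxes in pixel coordinates
if r.masks is not None:
    print(r.masks.xy)  # segmentation polygons in pixel coordinates
```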
Larger orthomosaics should be processed with the `sahi` library or with the `predict_image.py` script from the related GitHub repository.
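A minimal sketch of sliced prediction over a larger orthomosaic with `sahi`, assuming a recent `sahi` version; the weight path, image path, slice size and overlap values below are illustrative, not the exact settings used by the authors:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap the YOLOv8 weights in a sahi detection model.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8m_both.pt",  # placeholder path to the downloaded weights
    confidence_threshold=0.25,
    device="cuda:0",
)

# Run prediction over 640x640 slices with a small overlap and merge the results.
result = get_sliced_prediction(
    "orthomosaic.tif",  # placeholder image path
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="predictions/")
```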
Training Details
Training Data
The models were trained on manually annotated deadwood polygon data. From the Hiidenportti study area, 33 rectangular scenes were extracted and all visible deadwood in them was annotated. The same process was done for Sudenpesänkangas, where 71 scenes of 100x100 meters were extracted.
In total, the dataset contained 13,813 deadwood instances, of which 2,502 were standing deadwood canopies and 11,311 were fallen deadwood trunks. Hiidenportti dataset contained 1,083 standing and 7,396 fallen annotations, whereas Sudenpesänkangas contained 1,419 standing and 3,915 fallen annotations.
As using the full-sized scenes for training would be unfeasible due to their large size, the images were split into 640x640 pixel image chips without overlap, and the polygon annotations were converted to the YOLO annotation format. After this process, the Hiidenportti (HP) dataset contained 632 image chips for training, 142 for validation and 211 for testing, and the Sudenpesänkangas (SPK) dataset contained 688, 224 and 224 chips for training, validation and testing, respectively.
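For reference, in the YOLO segmentation format each annotated polygon becomes one line in a label text file: a class index followed by the polygon vertices normalized to [0, 1]. A minimal sketch of this conversion; the class indices (0 = standing, 1 = fallen) are hypothetical and used only for illustration:

```python
def polygon_to_yolo_seg(class_id, polygon_xy, img_width, img_height):
    """Convert one polygon (list of (x, y) pixel coordinates) to a YOLO segmentation label line."""
    coords = []
    for x, y in polygon_xy:
        coords.append(f"{x / img_width:.6f}")
        coords.append(f"{y / img_height:.6f}")
    return f"{class_id} " + " ".join(coords)

# Example: a fallen deadwood trunk (hypothetical class index 1) in a 640x640 chip.
line = polygon_to_yolo_seg(1, [(12, 400), (180, 260), (210, 290), (40, 430)], 640, 640)
print(line)  # "1 0.018750 0.625000 0.281250 0.406250 ..."
```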
There are three types of models: models with the `_hp` suffix are trained only on Hiidenportti data, models with the `_spk` suffix only on Sudenpesänkangas data, and models with the `_both` suffix on data from both sites.
Training Procedure
All models were trained on a single NVIDIA V100 GPU with 32 GB of memory on the Puhti supercomputer hosted by CSC -- IT Center for Science, Finland.
Each model was trained for a maximum of 30 epochs with an early stopping patience of 50 epochs, using the Adam optimizer with an initial learning rate of 0.001. Batch sizes were chosen to be as large as possible while consuming at most 60% of the available GPU memory. Automatic mixed precision was used during training.
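With the `ultralytics` API, a comparable training run could be configured roughly as follows; the dataset YAML path and batch size are placeholders, not the exact values used here:

```python
from ultralytics import YOLO

# Start from an Ultralytics pretrained segmentation checkpoint.
model = YOLO("yolov8m-seg.pt")

# Placeholder dataset config and batch size; epochs, optimizer, learning rate
# and mixed precision follow the settings described above.
model.train(
    data="deadwood.yaml",   # hypothetical dataset definition
    imgsz=640,
    epochs=30,
    patience=50,
    optimizer="Adam",
    lr0=0.001,
    batch=16,               # placeholder; chosen per model to fit GPU memory
    amp=True,
)
```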
Evaluation
Testing Data, Factors & Metrics
Testing Data
Models were evaluated on the test splits of both study sites.
Metrics
We used standard instance segmentation metrics (mask precision, recall, mAP50 and mAP50-95, denoted with (M) in the tables below), with the implementations from the `ultralytics` library.
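These metrics can be reproduced with the built-in validator; a rough sketch, assuming a dataset YAML with a test split, with attribute names following the current `ultralytics` segmentation metrics API:

```python
from ultralytics import YOLO

model = YOLO("yolov8m_both.pt")  # placeholder path to trained weights
metrics = model.val(data="deadwood.yaml", split="test")  # hypothetical dataset config

# Mask (segmentation) metrics averaged over classes.
print(metrics.seg.mp)     # mean precision
print(metrics.seg.mr)     # mean recall
print(metrics.seg.map50)  # mAP at IoU 0.50
print(metrics.seg.map)    # mAP at IoU 0.50:0.95
```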
Results
Results for Hiidenportti test data
Model | precision(M) Total | precision(M) Fallen | precision(M) Standing | recall(M) Total | recall(M) Fallen | recall(M) Standing | mAP50(M) Total | mAP50(M) Fallen | mAP50(M) Standing | mAP50-95(M) Total | mAP50-95(M) Fallen | mAP50-95(M) Standing
---|---|---|---|---|---|---|---|---|---|---|---|---
yolov8n_hp | 0.591 | 0.624 | 0.557 | 0.575 | 0.571 | 0.579 | 0.600 | 0.602 | 0.598 | 0.294 | 0.273 | 0.315 |
yolov8n_spk | 0.512 | 0.560 | 0.463 | 0.469 | 0.485 | 0.454 | 0.464 | 0.495 | 0.433 | 0.198 | 0.194 | 0.202 |
yolov8n_both | 0.720 | 0.741 | 0.699 | 0.571 | 0.534 | 0.607 | 0.647 | 0.612 | 0.683 | 0.317 | 0.263 | 0.371 |
yolov8s_hp | 0.688 | 0.679 | 0.697 | 0.581 | 0.563 | 0.599 | 0.643 | 0.613 | 0.672 | 0.325 | 0.280 | 0.370 |
yolov8s_spk | 0.548 | 0.669 | 0.428 | 0.478 | 0.463 | 0.492 | 0.484 | 0.528 | 0.439 | 0.212 | 0.213 | 0.211 |
yolov8s_both | 0.650 | 0.623 | 0.678 | 0.614 | 0.644 | 0.584 | 0.656 | 0.638 | 0.675 | 0.324 | 0.284 | 0.364 |
yolov8m_hp | 0.683 | 0.678 | 0.688 | 0.572 | 0.570 | 0.574 | 0.638 | 0.607 | 0.669 | 0.306 | 0.256 | 0.356 |
yolov8m_spk | 0.609 | 0.702 | 0.516 | 0.563 | 0.539 | 0.587 | 0.551 | 0.591 | 0.512 | 0.256 | 0.254 | 0.258 |
yolov8m_both | 0.676 | 0.643 | 0.710 | 0.619 | 0.637 | 0.602 | 0.671 | 0.638 | 0.703 | 0.338 | 0.286 | 0.390 |
yolov8l_hp | 0.673 | 0.642 | 0.704 | 0.572 | 0.611 | 0.533 | 0.624 | 0.599 | 0.648 | 0.302 | 0.256 | 0.348 |
yolov8l_spk | 0.609 | 0.700 | 0.518 | 0.530 | 0.524 | 0.536 | 0.544 | 0.585 | 0.504 | 0.254 | 0.254 | 0.253 |
yolov8l_both | 0.701 | 0.658 | 0.744 | 0.622 | 0.627 | 0.616 | 0.676 | 0.648 | 0.705 | 0.339 | 0.291 | 0.386 |
yolov8x_hp | 0.656 | 0.607 | 0.705 | 0.600 | 0.614 | 0.587 | 0.635 | 0.630 | 0.640 | 0.317 | 0.285 | 0.350 |
yolov8x_spk | 0.550 | 0.706 | 0.395 | 0.493 | 0.460 | 0.526 | 0.469 | 0.548 | 0.390 | 0.211 | 0.234 | 0.188 |
yolov8x_both | 0.709 | 0.684 | 0.734 | 0.620 | 0.603 | 0.638 | 0.682 | 0.654 | 0.709 | 0.353 | 0.306 | 0.400 |
Results for Sudenpesänkangas test data
Model | precision(M) Total | precision(M) Fallen | precision(M) Standing | recall(M) Total | recall(M) Fallen | recall(M) Standing | mAP50(M) Total | mAP50(M) Fallen | mAP50(M) Standing | mAP50-95(M) Total | mAP50-95(M) Fallen | mAP50-95(M) Standing
---|---|---|---|---|---|---|---|---|---|---|---|---
yolov8n_hp | 0.683 | 0.492 | 0.873 | 0.233 | 0.249 | 0.218 | 0.308 | 0.288 | 0.329 | 0.138 | 0.106 | 0.170 |
yolov8n_spk | 0.721 | 0.615 | 0.826 | 0.519 | 0.491 | 0.547 | 0.591 | 0.508 | 0.673 | 0.292 | 0.197 | 0.388 |
yolov8n_both | 0.730 | 0.682 | 0.778 | 0.527 | 0.444 | 0.611 | 0.604 | 0.504 | 0.705 | 0.305 | 0.198 | 0.413 |
yolov8s_hp | 0.586 | 0.446 | 0.726 | 0.342 | 0.347 | 0.336 | 0.414 | 0.331 | 0.497 | 0.187 | 0.121 | 0.253 |
yolov8s_spk | 0.670 | 0.634 | 0.706 | 0.609 | 0.517 | 0.702 | 0.638 | 0.537 | 0.739 | 0.310 | 0.206 | 0.413 |
yolov8s_both | 0.672 | 0.617 | 0.727 | 0.577 | 0.508 | 0.646 | 0.617 | 0.526 | 0.709 | 0.309 | 0.209 | 0.410 |
yolov8m_hp | 0.613 | 0.440 | 0.786 | 0.339 | 0.330 | 0.349 | 0.407 | 0.331 | 0.482 | 0.185 | 0.122 | 0.248 |
yolov8m_spk | 0.720 | 0.604 | 0.835 | 0.556 | 0.529 | 0.583 | 0.635 | 0.525 | 0.744 | 0.317 | 0.215 | 0.420 |
yolov8m_both | 0.716 | 0.639 | 0.792 | 0.581 | 0.515 | 0.647 | 0.646 | 0.535 | 0.757 | 0.336 | 0.225 | 0.447 |
yolov8l_hp | 0.573 | 0.414 | 0.732 | 0.340 | 0.366 | 0.313 | 0.397 | 0.328 | 0.465 | 0.162 | 0.113 | 0.212 |
yolov8l_spk | 0.709 | 0.641 | 0.777 | 0.584 | 0.501 | 0.667 | 0.639 | 0.530 | 0.748 | 0.332 | 0.223 | 0.442 |
yolov8l_both | 0.750 | 0.678 | 0.822 | 0.572 | 0.520 | 0.623 | 0.656 | 0.559 | 0.753 | 0.341 | 0.240 | 0.441 |
yolov8x_hp | 0.675 | 0.543 | 0.807 | 0.322 | 0.362 | 0.282 | 0.421 | 0.385 | 0.457 | 0.185 | 0.141 | 0.229 |
yolov8x_spk | 0.680 | 0.669 | 0.691 | 0.597 | 0.483 | 0.711 | 0.624 | 0.516 | 0.731 | 0.308 | 0.202 | 0.415 |
yolov8x_both | 0.711 | 0.663 | 0.760 | 0.611 | 0.554 | 0.667 | 0.651 | 0.556 | 0.746 | 0.333 | 0.234 | 0.432 |
Citation
BibTeX:
Added after submission
Model Card Contact
Janne Mäyrä, @mayrajeo on GitHub, Hugging Face, and many other services.