This model serves as the baseline for the Aerial Wildfire Suppression environment, trained and tested on task 7 with difficulty 6 using the Proximal Policy Optimization (PPO) algorithm.

Environment: Aerial Wildfire Suppression
Task: 7
Difficulty: 6
Algorithm: PPO
Episode Length: 3000
Training max_steps: 1800000
Testing max_steps: 180000
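
The configuration above can be applied when opening the environment locally through the ML-Agents Python API (`mlagents_envs`). The sketch below is a minimal example rather than the exact hivex training setup: the binary path is hypothetical, and the parameter keys `task` and `difficulty` are assumptions to verify against the hivex documentation. The build itself is obtained via the download link below.

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

# Hypothetical path to the downloaded Aerial Wildfire Suppression build.
ENV_PATH = "./AerialWildfireSuppression/AerialWildfireSuppression"

# Side channel used to pass environment parameters to the Unity build.
params = EnvironmentParametersChannel()
env = UnityEnvironment(file_name=ENV_PATH, side_channels=[params])

# Keys are assumptions; check the hivex docs for the exact parameter names.
params.set_float_parameter("task", 7)
params.set_float_parameter("difficulty", 6)

env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]
print(behavior_name, spec.action_spec)
```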

Train & Test Scripts
Download the Environment
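
Before launching a full training or testing run, the downloaded build can be smoke-tested with a random policy. This sketch continues from the loading example above (it reuses `env`, `behavior_name`, and `spec`); the step budget mirrors the testing max_steps reported in this card, though far fewer steps suffice for a quick check.

```python
TEST_MAX_STEPS = 180_000  # testing max_steps reported above; use fewer for a quick check

for _ in range(TEST_MAX_STEPS):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        # Sample uniformly random actions for every agent awaiting a decision.
        random_actions = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, random_actions)
    env.step()

env.close()
```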


Evaluation results (self-reported, on hivex-aerial-wildfire-suppression)

  • Crash Count: 0.07729871394112706 ± 0.10564317145739546
  • Cumulative Reward: 73.41971187591552 ± 27.54676335258844