This model serves as the baseline for the Aerial Wildfire Suppression environment, trained and tested on task 7 with difficulty 7 using the Proximal Policy Optimization (PPO) algorithm.

Environment: Aerial Wildfire Suppression
Task: 7
Difficulty: 7
Algorithm: PPO
Episode Length: 3000
Training max_steps: 1800000
Testing max_steps: 180000
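
For orientation, here is a minimal sketch of how the task and difficulty settings above could be passed to a local environment build through ML-Agents environment parameters. The parameter keys and the build path are placeholders; the train and test scripts linked below are the authoritative reference.

```python
# Hypothetical sketch: open a local Aerial Wildfire Suppression build and
# request task 7 at difficulty 7 via ML-Agents environment parameters.
# The parameter keys ("task", "difficulty") and the file path are placeholders.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

params = EnvironmentParametersChannel()
env = UnityEnvironment(
    file_name="./AerialWildfireSuppression",  # placeholder path to the downloaded build
    side_channels=[params],
    no_graphics=True,
)
params.set_float_parameter("task", 7.0)        # Task: 7
params.set_float_parameter("difficulty", 7.0)  # Difficulty: 7
env.reset()
env.close()
```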

Train & Test Scripts
Download the Environment
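
As a rough illustration of the training budget listed above, the sketch below trains a PPO policy for 1,800,000 steps, assuming the downloaded build is wrapped as a single-agent Gym environment with ML-Agents' UnityToGymWrapper and trained with stable-baselines3. The wrapper, trainer, import paths (recent ML-Agents release), and file paths are assumptions; the actual baseline was produced with the train and test scripts linked above.

```python
# Hypothetical sketch: PPO training with the step budget listed above
# (1,800,000 training steps). Wrapper and trainer choices are assumptions;
# see the linked train & test scripts for the actual baseline setup.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
from stable_baselines3 import PPO

unity_env = UnityEnvironment(
    file_name="./AerialWildfireSuppression",  # placeholder path to the downloaded build
    no_graphics=True,
)
env = UnityToGymWrapper(unity_env)  # single-agent Gym view of the environment

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_800_000)  # Training max_steps
model.save("ppo_aerial_wildfire_suppression_task7_diff7")

env.close()
```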

Evaluation results (self-reported, on hivex-aerial-wildfire-suppression)

  • Crash Count: 0.10861299121752381 +/- 0.07602076371931286
  • Cumulative Reward: 78.5054485321045 +/- 15.433723402678817
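
The reported values are means with standard deviations over test episodes. A minimal sketch of how such a summary could be produced from test rollouts, reusing the assumed wrapper and model from the training sketch (paths, episode count, and per-episode numbers are placeholders):

```python
# Hypothetical sketch: roll out a trained policy for several test episodes and
# report cumulative reward as "mean +/- stdev", matching the format above.
import statistics

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
from stable_baselines3 import PPO

unity_env = UnityEnvironment(file_name="./AerialWildfireSuppression", no_graphics=True)
env = UnityToGymWrapper(unity_env)
model = PPO.load("ppo_aerial_wildfire_suppression_task7_diff7")

episode_rewards = []
for _ in range(10):  # placeholder number of test episodes
    obs = env.reset()
    done, total = False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)
        total += reward
    episode_rewards.append(total)

print(f"Cumulative Reward: {statistics.mean(episode_rewards)} "
      f"+/- {statistics.stdev(episode_rewards)}")
env.close()
```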