This model serves as the baseline for the Aerial Wildfire Suppression environment, trained and tested on task 7 with difficulty 3 using the Proximal Policy Optimization (PPO) algorithm.

Environment: Aerial Wildfire Suppression
Task: 7
Difficulty: 3
Algorithm: PPO
Episode Length: 3000
Training max_steps: 1800000
Testing max_steps: 180000
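
With an episode length of 3000 steps, the training budget of 1,800,000 steps corresponds to roughly 600 episodes and the testing budget of 180,000 steps to roughly 60 episodes, assuming episodes run to full length.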

Train & Test Scripts
Download the Environment
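
Once the environment build is downloaded, a minimal sketch of connecting to it with the ML-Agents Python API and requesting task 7 at difficulty 3 could look as follows. The binary path and the parameter keys ("task" and "difficulty") are assumptions; refer to the linked train and test scripts for the exact names and entry points used by hivex.

```python
# Minimal sketch (not the hivex train/test scripts): connect to the downloaded
# Aerial Wildfire Suppression build via the ML-Agents Python API and request
# task 7 at difficulty 3 through an environment-parameters side channel.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

params = EnvironmentParametersChannel()
params.set_float_parameter("task", 7.0)        # assumed parameter key
params.set_float_parameter("difficulty", 3.0)  # assumed parameter key

env = UnityEnvironment(
    file_name="./AerialWildfireSuppression",   # placeholder path to the downloaded build
    side_channels=[params],
    no_graphics=True,
)
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# Random actions stand in for the trained PPO policy here.
for _ in range(100):
    decision_steps, _terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        env.set_actions(
            behavior_name, spec.action_spec.random_action(len(decision_steps))
        )
    env.step()

env.close()
```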

Evaluation results

  • Crash Count on hivex-aerial-wildfire-suppression (self-reported): 0.10552959488704801 +/- 0.08593024068174016
  • Cumulative Reward on hivex-aerial-wildfire-suppression (self-reported): 72.42410793304444 +/- 42.89525372722262