---
library_name: hivex
original_train_name: WildfireResourceManagement_difficulty_6_task_0_run_id_0_train
tags:
- hivex
- hivex-wildfire-resource-management
- reinforcement-learning
- multi-agent-reinforcement-learning
model-index:
- name: hivex-WRM-PPO-baseline-task-0-difficulty-6
  results:
  - task:
      type: main-task
      name: main_task
      task-id: 0
      difficulty-id: 6
    dataset:
      name: hivex-wildfire-resource-management
      type: hivex-wildfire-resource-management
    metrics:
    - type: cumulative_reward
      value: 107.97416267395019 +/- 48.20278489614717
      name: Cumulative Reward
      verified: true
    - type: collective_performance
      value: 46.81386375427246 +/- 21.01886010155952
      name: Collective Performance
      verified: true
    - type: individual_performance
      value: 23.57359619140625 +/- 10.460954413923357
      name: Individual Performance
      verified: true
    - type: reward_for_moving_resources_to_neighbours
      value: 57.37900886535645 +/- 29.412938343401375
      name: Reward for Moving Resources to Neighbours
      verified: true
    - type: reward_for_moving_resources_to_self
      value: 0.5378097414970398 +/- 0.316862991497751
      name: Reward for Moving Resources to Self
      verified: true
---

This model serves as the baseline for the **Wildfire Resource Management** environment, trained and tested on task 0 with difficulty 6 using the Proximal Policy Optimization (PPO) algorithm.

- Environment: **Wildfire Resource Management**
- Task: 0
- Difficulty: 6
- Algorithm: PPO
- Episode Length: 500
- Training `max_steps`: 450000
- Testing `max_steps`: 45000
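
The sketch below shows one way to open the environment binary with the ML-Agents low-level Python API and step through one episode with random actions. The binary path and the `task` / `difficulty` parameter keys are assumptions made for illustration; the official train and test scripts linked below handle this configuration.

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

# Assumed local path to the downloaded environment binary and assumed
# parameter keys for selecting task 0 at difficulty 6.
params = EnvironmentParametersChannel()
env = UnityEnvironment(
    file_name="path/to/WildfireResourceManagement",
    side_channels=[params],
)
params.set_float_parameter("task", 0.0)
params.set_float_parameter("difficulty", 6.0)

env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# One episode of random actions (episode length is 500 steps).
for _ in range(500):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        actions = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, actions)
    env.step()

env.close()
```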

- Train & Test [Scripts](https://github.com/hivex-research/hivex)
- Download the [Environment](https://github.com/hivex-research/hivex-environments)
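
As a minimal sketch, the trained baseline files can be fetched from the Hugging Face Hub with `huggingface_hub`; the repository id below is an assumption based on the model name, so replace it with the repository actually hosting this model card.

```python
from huggingface_hub import snapshot_download

# Assumed repository id -- adjust to the actual Hub repository.
local_dir = snapshot_download(
    repo_id="hivex-research/hivex-WRM-PPO-baseline-task-0-difficulty-6"
)
print(local_dir)  # local folder containing the downloaded model files
```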