---
library_name: hivex
original_train_name: OceanPlasticCollection_task_1_run_id_0_train
tags:
  - hivex
  - hivex-ocean-plastic-collection
  - reinforcement-learning
  - multi-agent-reinforcement-learning
model-index:
  - name: hivex-OPC-PPO-baseline-task-1
    results:
      - task:
          type: sub-task
          name: find_highest_polluted_area
          task-id: 1
        dataset:
          name: hivex-ocean-plastic-collection
          type: hivex-ocean-plastic-collection
        metrics:
          - type: cumulative_reward
            value: 994.6653747558594 +/- 158.13702190020126
            name: "Cumulative Reward"
            verified: true
          - type: global_reward
            value: 226.50474700927734 +/- 57.553598550844015
            name: "Global Reward"
            verified: true
          - type: local_reward
            value: 142.19907608032227 +/- 19.368785745326573
            name: "Local Reward"
            verified: true
---
This model serves as the baseline for the **Ocean Plastic Collection** environment, trained and tested on task <code>1</code> using the Proximal Policy Optimization (PPO) algorithm.<br>
<br>
Environment: **Ocean Plastic Collection**<br>
Task: <code>1</code><br>
Algorithm: <code>PPO</code><br>
Episode Length: <code>5000</code><br>
Training <code>max_steps</code>: <code>3000000</code><br>
Testing <code>max_steps</code>: <code>150000</code><br>
<br>
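
If the trained policy is published in this repository as an ONNX export (the usual ML-Agents format), it can be fetched and inspected as in the minimal sketch below. The repository id, the `.onnx` file name, and the graph inputs/outputs are assumptions rather than facts taken from this card; check the repository's file listing for the actual artifact.

```python
# Minimal sketch: download this baseline's exported policy from the Hugging Face Hub
# and inspect it with onnxruntime. The repo_id and filename below are hypothetical
# placeholders; replace them with the actual repository id and .onnx file name.
import onnxruntime as ort
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="hivex-research/hivex-OPC-PPO-baseline-task-1",  # hypothetical repo id
    filename="OceanPlasticCollection.onnx",                  # hypothetical file name
)

session = ort.InferenceSession(model_path)

# List the tensors the exported policy expects and produces.
for tensor in session.get_inputs():
    print("input :", tensor.name, tensor.shape)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape)
```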
Train & Test [Scripts](https://github.com/hivex-research/hivex)<br>
Download the [Environment](https://github.com/hivex-research/hivex-environments)
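
Since hivex environments are Unity ML-Agents builds, the downloaded Ocean Plastic Collection binary can typically be driven directly through the ML-Agents low-level Python API, as in the sketch below; the binary path is a placeholder, not a value from this card.

```python
# Minimal sketch: open the downloaded Ocean Plastic Collection binary with the
# ML-Agents low-level Python API. The file_name path is a placeholder; point it
# at the binary obtained from the hivex-environments repository.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/OceanPlasticCollection", no_graphics=True)
env.reset()

# Each behavior name corresponds to one agent type defined by the environment.
for behavior_name in env.behavior_specs:
    print(behavior_name)

env.close()
```

For end-to-end training and evaluation, the hivex train and test scripts linked above wrap this setup with the task and algorithm configuration used for this baseline.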