PPO playing QbertNoFrameskip-v4 from https://github.com/sgoodfriend/rl-algo-impls/tree/0511de345b17175b7cf1ea706c3e05981f11761c
---
library_name: rl-algo-impls
tags:
- QbertNoFrameskip-v4
- ppo
- deep-reinforcement-learning
- reinforcement-learning
model-index:
- name: ppo
  results:
  - metrics:
    - type: mean_reward
      value: 13079.69 +/- 3555.52
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: QbertNoFrameskip-v4
      type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**

This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4** using the [sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.

All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/7lx79bf0.
## Training Results

This model comes from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [0511de3](https://github.com/sgoodfriend/rl-algo-impls/tree/0511de345b17175b7cf1ea706c3e05981f11761c). The best and last models were kept from each training. This submission loaded the best model from each training, reevaluated it, and selected the model with the best reevaluation score (mean reward minus standard deviation).
| algo | env                 | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-----|:--------------------|-----:|------------:|-----------:|--------------:|:-----|:----------|
| ppo  | QbertNoFrameskip-v4 |    1 |     13079.7 |    3555.52 |            16 | *    | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/dkzlwap0) |
| ppo  | QbertNoFrameskip-v4 |    2 |     12012.5 |    3064.83 |            16 |      | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/v3mjc432) |
| ppo  | QbertNoFrameskip-v4 |    3 |     11512.5 |    3495.89 |            16 |      | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ek6r88yr) |
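The selection rule above (mean reward minus standard deviation) can be sanity-checked against the table. This small sketch (not code from the repo) reproduces the choice of seed 1:

```python
# (seed, reward_mean, reward_std) copied from the table above
runs = [
    (1, 13079.7, 3555.52),
    (2, 12012.5, 3064.83),
    (3, 11512.5, 3495.89),
]

# Score each run by mean - std and keep the highest-scoring one
best = max(runs, key=lambda r: r[1] - r[2])
print(best[0])  # seed 1: 13079.7 - 3555.52 = 9524.18 beats the other two
```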
### Prerequisites: Weights & Biases (WandB)

Training and benchmarking assume you have a Weights & Biases project to upload runs to.
By default, training runs go to the rl-algo-impls project, while benchmark runs go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.

Before doing anything below, you'll need to create a wandb account and run `wandb login`.
## Usage

sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls

Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could have diverged enough that it cannot reproduce similar
results. You may need to check out the commit the agent was trained on:
[0511de3](https://github.com/sgoodfriend/rl-algo-impls/tree/0511de345b17175b7cf1ea706c3e05981f11761c).
```
# Downloads the model, sets hyperparameters, and runs the agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/dkzlwap0
```

Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab, starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training

For the highest chance of reproducing these results, you'll want to check out the
commit the agent was trained on: [0511de3](https://github.com/sgoodfriend/rl-algo-impls/tree/0511de345b17175b7cf1ea706c3e05981f11761c). While
training is deterministic, different hardware will give different results.

```
python train.py --algo ppo --env QbertNoFrameskip-v4 --seed 1
```

Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab, starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)

This and other models from https://api.wandb.ai/links/sgoodfriend/7lx79bf0 were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:

```
git clone git@github.com:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh [-a {"ppo a2c dqn vpg"}] [-e ENVS] [-j {6}] [-p {rl-algo-impls-benchmarks}] [-s {"1 2 3"}]
```
### Alternative: Google Colab Pro+

As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb)
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances, because otherwise running all jobs would exceed the 24-hour limit.
## Hyperparameters

This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:

```
additional_keys_to_log: []
algo: ppo
algo_hyperparams:
  batch_size: 256
  clip_range: 0.1
  clip_range_decay: linear
  ent_coef: 0.01
  learning_rate: 0.00025
  learning_rate_decay: linear
  n_epochs: 4
  n_steps: 128
  vf_coef: 0.5
device: auto
env: QbertNoFrameskip-v4
env_hyperparams:
  frame_stack: 4
  n_envs: 8
  no_reward_fire_steps: 500
  no_reward_timeout_steps: 1000
  vec_env_class: async
env_id: null
eval_params:
  deterministic: false
n_timesteps: 10000000
policy_hyperparams:
  activation_fn: relu
seed: 1
use_deterministic_algorithms: true
wandb_entity: null
wandb_group: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_0511de3
- host_152-67-249-42
- branch_main
- v0.0.8
```
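A few of these values interact: `n_steps: 128` across `n_envs: 8` environments yields 1024 transitions per update, split into minibatches of `batch_size: 256` for each of the 4 epochs, while `learning_rate` and `clip_range` both decay linearly. This is an illustrative sketch of a linear schedule (not the repo's actual scheduler), assuming both values anneal to zero over training:

```python
def linear_schedule(initial_value: float):
    """Anneal initial_value linearly to 0 as progress_remaining goes 1.0 -> 0.0."""
    def schedule(progress_remaining: float) -> float:
        return initial_value * progress_remaining
    return schedule

lr = linear_schedule(0.00025)  # learning_rate with learning_rate_decay: linear
clip = linear_schedule(0.1)    # clip_range with clip_range_decay: linear

# Rollout buffer per update: n_steps * n_envs = 128 * 8 = 1024 transitions,
# i.e. 1024 / 256 = 4 minibatches for each of the 4 epochs (n_epochs).
print(lr(0.5))    # 0.000125 halfway through training
print(clip(0.0))  # 0.0 at the end
```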