---
tags:
- Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Qbert-v5
      type: Qbert-v5
    metrics:
    - type: mean_reward
      value: 21432.50 +/- 3557.67
      name: mean_reward
      verified: false
---

# (CleanRL) **PPO** Agent Playing **Qbert-v5**

This is a trained model of a PPO agent playing Qbert-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).

## Get Started

To use this model, install the `cleanrl` package and run the evaluation entry point with the following commands:

```bash
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Qbert-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
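
The `cleanrl_utils.enjoy` entry point fetches the checkpoint from this repository before running it. If you prefer to download the file yourself, a minimal sketch using `huggingface_hub` is shown below; the checkpoint filename is an assumption based on CleanRL's usual `{exp_name}.cleanrl_model` convention and may differ from the actual file in this repository.

```python
# Minimal sketch (not part of the official CleanRL tooling).
# The filename is assumed from CleanRL's "{exp_name}.cleanrl_model" convention;
# adjust it to match the files actually stored in this repository.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3",
    filename="cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.cleanrl_model",  # assumed filename
)
print(model_path)  # local path to the downloaded checkpoint
```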


## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Qbert-v5 --seed 3
```
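
The `a0_l1_d4` suffix in the experiment name matches the flags above: actor on local device 0, learner on local device 1, and 4 distributed processes. The sketch below (not from the card) shows how the global learner placement reported in the hyperparameters can arise from that layout; the assumption that each process sees exactly 2 GPUs is inferred from the reported `['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7']` mapping, not stated in the card.

```python
# Sketch of the device mapping implied by the flags and the reported hyperparameters.
world_size = 4                   # 4 distributed processes
devices_per_process = 2          # assumed: gpu:0 (actor) and gpu:1 (learner) visible per process
learner_device_ids = [1]         # --learner-device-ids 1

global_learner_devices = [
    f"gpu:{rank * devices_per_process + d}"
    for rank in range(world_size)
    for d in learner_device_ids
]
print(global_learner_devices)    # ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7']
```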

# Hyperparameters
```python
{'actor_device_ids': [0],
 'actor_devices': ['gpu:0'],
 'anneal_lr': True,
 'async_batch_size': 30,
 'async_update': 1,
 'batch_size': 2400,
 'capture_video': False,
 'cuda': True,
 'distributed': True,
 'ent_coef': 0.01,
 'env_id': 'Qbert-v5',
 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
 'gamma': 0.99,
 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1],
 'learner_devices': ['gpu:1'],
 'learning_rate': 0.00025,
 'local_batch_size': 600,
 'local_minibatch_size': 300,
 'local_num_envs': 30,
 'local_rank': 0,
 'max_grad_norm': 0.5,
 'minibatch_size': 1200,
 'num_envs': 120,
 'num_minibatches': 2,
 'num_steps': 20,
 'num_updates': 20833,
 'profile': False,
 'save_model': True,
 'seed': 3,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanba',
 'world_size': 4}
```
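
For context, the derived sizes in the dictionary above follow from the base flags by the usual PPO/Cleanba bookkeeping. The arithmetic below is a sanity check added for illustration, not part of the original card.

```python
# How the derived hyperparameters follow from the base flags (sanity check).
local_num_envs = 30
num_steps = 20
world_size = 4                   # 4 distributed processes
num_minibatches = 2
total_timesteps = 50_000_000

num_envs = local_num_envs * world_size                        # 120
local_batch_size = local_num_envs * num_steps                 # 600
batch_size = local_batch_size * world_size                    # 2400
local_minibatch_size = local_batch_size // num_minibatches    # 300
minibatch_size = batch_size // num_minibatches                # 1200
num_updates = total_timesteps // batch_size                   # 20833

assert (num_envs, batch_size, minibatch_size, num_updates) == (120, 2400, 1200, 20833)
```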