---
tags:
- BeamRider-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: BeamRider-v5
      type: BeamRider-v5
    metrics:
    - type: mean_reward
      value: 11643.20 +/- 4100.01
      name: mean_reward
      verified: false
---

# (CleanRL) **PPO** Agent Playing **BeamRider-v5**

This is a trained model of a PPO agent playing BeamRider-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl); the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).

## Get Started

To use this model, install the `cleanrl` package and run the evaluation script with the following commands:

```bash
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id BeamRider-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more details.
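
If you prefer to fetch the checkpoint programmatically instead of going through `cleanrl_utils.enjoy`, the sketch below uses `huggingface_hub` to download it. The repository id comes from this model card; the checkpoint filename is an assumption based on CleanRL's usual `{exp_name}.cleanrl_model` naming, so check the repository file listing if it differs.

```python
# Minimal sketch: download the saved checkpoint from the Hugging Face Hub.
# The filename "sebulba_ppo_envpool.cleanrl_model" is an assumption (CleanRL's
# usual "{exp_name}.cleanrl_model" convention), not taken from this card.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1",
    filename="sebulba_ppo_envpool.cleanrl_model",
)
print(f"Checkpoint downloaded to: {model_path}")
```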


## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 1
```

# Hyperparameters
```python
{'actor_device_ids': [0],
 'anneal_lr': True,
 'async_batch_size': 16,
 'async_update': 4,
 'batch_size': 8192,
 'capture_video': False,
 'clip_coef': 0.1,
 'cuda': True,
 'ent_coef': 0.01,
 'env_id': 'BeamRider-v5',
 'exp_name': 'sebulba_ppo_envpool',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1, 2, 3, 4],
 'learning_rate': 0.00025,
 'max_grad_norm': 0.5,
 'minibatch_size': 2048,
 'norm_adv': True,
 'num_actor_threads': 1,
 'num_envs': 64,
 'num_minibatches': 4,
 'num_steps': 128,
 'num_updates': 6103,
 'params_queue_timeout': 0.02,
 'profile': False,
 'save_model': True,
 'seed': 1,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'update_epochs': 4,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
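
Several of the values above are derived rather than set directly. Assuming CleanRL's usual relationships between them (a sketch, not the training script itself), the arithmetic reproduces the listed numbers exactly:

```python
# How the derived hyperparameters relate to the base ones
# (assuming CleanRL's standard definitions; values match the table above).
num_envs, num_steps = 64, 128
num_minibatches, total_timesteps = 4, 50_000_000
async_batch_size = 16

batch_size = num_envs * num_steps                # 64 * 128 = 8192
minibatch_size = batch_size // num_minibatches   # 8192 // 4 = 2048
num_updates = total_timesteps // batch_size      # 50_000_000 // 8192 = 6103
async_update = num_envs // async_batch_size      # 64 // 16 = 4

assert (batch_size, minibatch_size, num_updates, async_update) == (8192, 2048, 6103, 4)
```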