ppo-LunarLander-v2 / results.json
First model trained with PPO on the LunarLander-v2 environment (commit b2149f9).
{"mean_reward": 281.5176592940188, "std_reward": 16.907409538064478, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2023-02-27T21:13:26.015824"}