stinoco/PPO-LunarLander-v2
Tags: Reinforcement Learning, Transformers, TensorBoard, LunarLander-v2, ppo, deep-reinforcement-learning, custom-implementation, deep-rl-course, Eval Results, Inference Endpoints
PPO-LunarLander-v2 / first_PPO
1 contributor · History: 1 commit
stinoco: "Adding PPO model for solving LunarLander-v2" (64c188b, almost 2 years ago)
_stable_baselines3_version · 5 Bytes · Safe
data · 14.7 kB · Safe
policy.optimizer.pth · 87.9 kB · LFS · Safe · pickle · Detected Pickle imports (3): torch.FloatStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict
policy.pth · 43.4 kB · LFS · Safe · pickle · Detected Pickle imports (3): torch._utils._rebuild_tensor_v2, torch.FloatStorage, collections.OrderedDict
pytorch_variables.pth · 431 Bytes · LFS · Safe · pickle · No problematic imports detected
system_info.txt · 199 Bytes · Safe

All files were added in the single commit "Adding PPO model for solving LunarLander-v2".
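The "Detected Pickle imports" annotations above come from the Hub's automated scan of pickle-based files (such as the `.pth` checkpoints here), which lists every global each pickle would import when loaded. A minimal standard-library sketch of the same idea, not the actual Hub scanner, using `pickletools.genops` to find `GLOBAL`/`STACK_GLOBAL` opcodes:

```python
import pickle
import pickletools
from collections import OrderedDict


def detect_pickle_imports(data: bytes) -> set[str]:
    """Collect every module.name a pickle stream would import, without unpickling it."""
    imports: set[str] = set()
    strings: list[str] = []  # string arguments seen so far, in stream order
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # Old-style opcode: arg is "module name" in one string.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # New-style opcode: module and name are the two most recent
            # string opcodes pushed onto the stack.
            if len(strings) >= 2:
                imports.add(f"{strings[-2]}.{strings[-1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return imports


payload = pickle.dumps(OrderedDict(a=1), protocol=4)
print(sorted(detect_pickle_imports(payload)))  # ['collections.OrderedDict']
```

Static scanning like this is why the imports can be reported (and flagged as safe or problematic) without ever executing the untrusted pickle; loading the real `.pth` files should still only be done through `torch.load` / `stable_baselines3.PPO.load` on sources you trust.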