---
license: apache-2.0
datasets:
- JayLee131/vqbet_pusht
pipeline_tag: robotics
---

# Model Card for VQ-BeT/PushT

VQ-BeT (as per [Behavior Generation with Latent Actions](https://arxiv.org/abs/2403.03181)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).

## How to Get Started with the Model

See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.

## Training Details

The model was trained using this command:

```bash
python lerobot/scripts/train.py \
  policy=vqbet \
  env=pusht \
  dataset_repo_id=lerobot/pusht \
  wandb.enable=true \
  device=cuda
```

Training took about 7 hours on an Nvidia A6000.

## Model Size

| | Number of Parameters |
|-|-|
| RGB Encoder | 11.2M |
| Remaining VQ-BeT Parts | 26.3M |

## Evaluation

The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht). There are two evaluation metrics on a per-episode basis:

- Maximum overlap with the target (reported as `eval/avg_max_reward`). This ranges in [0, 1].
- Success: whether or not the maximum overlap is at least 95%.

Here are the metrics over 500 evaluation episodes.

| | Ours |
|-|-|
| Average max. overlap ratio for 500 episodes | 0.887 |
| Success rate for 500 episodes (%) | 66.0 |

The results of each of the individual rollouts may be found in [eval_info.json](eval_info.json).
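For illustration, below is a minimal single-episode rollout sketch using the LeRobot policy API and gym-pusht. It is not the official evaluation script: it assumes a LeRobot version that exposes `VQBeTPolicy.from_pretrained` under `lerobot.common.policies.vqbet.modeling_vqbet`, and it uses the repo id `lerobot/vqbet_pusht` as a placeholder for wherever this checkpoint is hosted. The aggregate numbers above should be reproduced with the evaluation script linked in "How to Get Started with the Model", which rolls out many such episodes.

```python
# Minimal single-episode rollout sketch (not the official eval.py script).
# Assumptions: lerobot with VQBeTPolicy.from_pretrained, gym-pusht installed,
# and this checkpoint available under the placeholder repo id "lerobot/vqbet_pusht".
import gymnasium as gym
import gym_pusht  # noqa: F401  # registers the gym_pusht/PushT-v0 environment
import torch

from lerobot.common.policies.vqbet.modeling_vqbet import VQBeTPolicy

device = "cuda" if torch.cuda.is_available() else "cpu"
policy = VQBeTPolicy.from_pretrained("lerobot/vqbet_pusht")  # placeholder repo id
policy.to(device)
policy.eval()
policy.reset()

# "pixels_agent_pos" matches the image + agent-position observations the policy expects.
env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", max_episode_steps=300)
obs, _ = env.reset(seed=0)

max_reward, done = 0.0, False
while not done:
    # Pack the observation into the batch format used by LeRobot policies.
    state = torch.tensor(obs["agent_pos"], dtype=torch.float32, device=device).unsqueeze(0)
    image = torch.tensor(obs["pixels"], dtype=torch.float32, device=device) / 255.0
    image = image.permute(2, 0, 1).unsqueeze(0)  # HWC -> BCHW
    batch = {"observation.state": state, "observation.image": image}

    with torch.inference_mode():
        action = policy.select_action(batch)

    obs, reward, terminated, truncated, _ = env.step(action.squeeze(0).cpu().numpy())
    max_reward = max(max_reward, float(reward))  # reward is the overlap ratio in [0, 1]
    done = terminated or truncated

print(f"max overlap: {max_reward:.3f}, success: {max_reward >= 0.95}")
```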