Commit 2443af2
Parent(s): f03ec58

Update README.md

Files changed (1):
  README.md +8 -3
README.md CHANGED
@@ -25,10 +25,15 @@ Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/)
 
 ## Uses
 
-RDT takes language instruction, image observations and proprioception as input, and predicts the next 64 robot actions in the form of unified action space vector,
-including all the main physical quantities of robots, including the end-effector and joint, position and velocity, base movement, etc.
+RDT takes a language instruction, image observations, and proprioception as input, and predicts the next 64 robot actions in the form of a unified action space vector.
+The unified action space vector includes all the main physical quantities of robots (e.g., end-effector and joint position and velocity, base movement, etc.) and can be applied to a wide range of robotic embodiments.
 
-### Getting Started
+The pre-trained RDT model can be fine-tuned for a specific robotic embodiment and deployed on real-world robots.
+Here's an example of how to use the RDT-1B model for inference on a Mobile-ALOHA robot:
+
+```python
+# Clone the repo and install dependencies
+```
 
 RDT-1B supports finetuning on custom dataset, deploying and inferencing on real-robots, as well as pretraining the model.
 
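The code block added by this commit is still a placeholder, so here is a minimal runnable sketch of the control flow the new text describes: a language instruction, image observations, and proprioception in; a chunk of 64 unified-action-space vectors out. The names `DummyRDTPolicy` and `predict_action_chunk`, the 128-dimensional action vector, and the three-camera observation are assumptions for illustration, not the actual RDT-1B API.

```python
# A minimal, runnable sketch of the inference loop described in the README
# text above. DummyRDTPolicy, predict_action_chunk, ACTION_DIM = 128, and the
# three-camera setup are illustrative assumptions, not the real RDT-1B API.
import numpy as np

ACTION_CHUNK = 64   # per the README: RDT predicts the next 64 robot actions
ACTION_DIM = 128    # assumed size of the unified action space vector


class DummyRDTPolicy:
    """Stand-in for a fine-tuned RDT-1B checkpoint (hypothetical interface)."""

    def predict_action_chunk(self, instruction: str,
                             images: list[np.ndarray],
                             proprio: np.ndarray) -> np.ndarray:
        # A real policy would run the diffusion transformer here; we return
        # zeros of the documented shape so the loop runs end to end.
        return np.zeros((ACTION_CHUNK, ACTION_DIM), dtype=np.float32)


def control_loop(policy: DummyRDTPolicy, steps: int = 2) -> None:
    instruction = "wipe the table"  # free-form language instruction
    for _ in range(steps):
        # e.g. one exterior and two wrist cameras on Mobile ALOHA (assumed)
        images = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
        proprio = np.zeros(ACTION_DIM, dtype=np.float32)  # current robot state
        actions = policy.predict_action_chunk(instruction, images, proprio)
        for action in actions:  # 64 action vectors per call
            pass  # here you would send each vector to the robot controller


if __name__ == "__main__":
    control_loop(DummyRDTPolicy())
```

This sketch only fixes the shapes and data flow that the README text specifies; for the real entry points (model loading, checkpoints, and the Mobile-ALOHA deployment scripts), follow the repository linked from the project page.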