robotics-diffusion-transformer
committed on
Commit f062788
Parent(s): 2443af2
Update README.md
README.md CHANGED

@@ -29,14 +29,13 @@ RDT takes language instruction, image observations and proprioception as input,
The unified action space vector includes all the main physical quantities of a robot (e.g., end-effector and joint positions and velocities, base movement, etc.) and can be applied to a wide range of robotic embodiments.

The pre-trained RDT model can be fine-tuned for a specific robotic embodiment and deployed on real-world robots.

-Here's an example of how to use the RDT-1B model for inference on a Mobile-ALOHA robot
+Here's an example of how to use the RDT-1B model for inference on a Mobile-ALOHA robot:

```python
# Clone the repo and install dependencies:
#   git clone https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer
```

RDT-1B supports fine-tuning on custom datasets, deployment and inference on real robots, as well as pre-training the model.
-
Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides.
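
The fenced block in the diff above is only a placeholder comment. As a rough sketch of what Mobile-ALOHA inference could look like, assuming the repository exposes a `create_model` helper and a `model.step` inference call (the module path, config keys, and tensor shapes below are assumptions, not the repo's confirmed interface; consult the repository docs for the authoritative version):

```python
# Sketch only: module path, helper names, and config keys are assumptions
# about the RoboticsDiffusionTransformer repository's interface.
import torch
from scripts.agilex_model import create_model  # assumed module path in the repo

# Hypothetical Mobile-ALOHA configuration.
config = {
    'episode_len': 1000,       # max episode length
    'state_dim': 14,           # proprioception dimensions
    'chunk_size': 64,          # actions predicted per inference call
    'camera_names': ['cam_high', 'cam_right_wrist', 'cam_left_wrist'],
}

# Load the pre-trained RDT-1B checkpoint from the Hugging Face Hub.
model = create_model(
    args=config,
    dtype=torch.bfloat16,
    pretrained='robotics-diffusion-transformer/rdt-1b',
    control_frequency=25,      # control loop frequency (Hz)
)

# Dummy inputs with illustrative shapes; replace with real observations.
proprio = torch.zeros(1, 14)            # current robot state
images = [None] * 6                     # last 2 frames from each of 3 cameras
text_embeds = torch.zeros(1, 64, 4096)  # precomputed instruction embedding

# Predict the next `chunk_size` actions in the unified action space.
actions = model.step(proprio=proprio, images=images, text_embeds=text_embeds)
```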
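
For intuition about the unified action space described in the diff, here is a minimal sketch of embedding one embodiment's state into a shared vector where each dimension has a fixed physical meaning and missing quantities are masked out. The 128-dimensional size follows the RDT paper; the slot indices are illustrative placeholders, not the repository's actual mapping.

```python
import numpy as np

# Illustrative only: UNIFIED_DIM follows the RDT paper's 128-dim unified
# action space; the slot indices below are made up for this example and
# do NOT reflect the repository's real layout.
UNIFIED_DIM = 128
SLOTS = {
    'right_arm_joint_pos': slice(0, 7),  # 7 joint positions
    'right_gripper_width': slice(7, 8),  # 1 gripper width
    # ...further slots for velocities, end-effector pose, base movement, etc.
}

def embed_state(joint_pos: np.ndarray, gripper_width: float):
    """Fill the slots this embodiment has; mask the dimensions it lacks."""
    vec = np.zeros(UNIFIED_DIM, dtype=np.float32)
    mask = np.zeros(UNIFIED_DIM, dtype=bool)
    vec[SLOTS['right_arm_joint_pos']] = joint_pos
    mask[SLOTS['right_arm_joint_pos']] = True
    vec[SLOTS['right_gripper_width']] = gripper_width
    mask[SLOTS['right_gripper_width']] = True
    return vec, mask  # the mask tells the model which dims are real

# Example: a single 7-DoF arm with a parallel gripper.
vec, mask = embed_state(np.zeros(7, dtype=np.float32), gripper_width=0.04)
```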