Commit
f03ec58
Parent: ba0ed2f

Update README.md

Files changed (1)
  1. README.md +16 -9
README.md CHANGED

@@ -1,29 +1,36 @@
  ---
  license: mit
+ language:
+ - en
  ---
  # RDT-1B

  RDT-1B is a 1B-parameter imitation learning Diffusion Transformer pre-trained on 1M+ multi-robot episodes. Given a language instruction and 3-view RGB image observations, RDT can predict the next
  64 robot actions. RDT is inherently compatible with almost all kinds of modern mobile manipulators, from single-arm to dual-arm, joint to EEF, pos. to vel., and even with a mobile chassis.

- All the code and model weights are licensed under MIT license.
+ All the [code]() and pretrained model weights are licensed under the MIT license.

- Please refer to our [project page](), [github repository]() and [paper]() for more information.
+ Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/) and [paper]() for more information.

  ## Model Details

- - **Developed by** Thu-ml team
+ - **Developed by:** RDT Team from Tsinghua University
  - **License:** MIT
- - **Pretrain dataset:** [More Information Needed]
- - **Finetune dataset:** [More Information Needed]
-
- - **Repository:** [More Information Needed]
- - **Paper :** [More Information Needed]
+ - **Language(s) (NLP):** en
+ - **Model Architecture:** Diffusion Transformer
+ - **Pretrain dataset:** a curated pretraining dataset collected from 46 datasets; see [here]() for details
+ - **Repository:** [repo_url]
+ - **Paper:** [paper_url]
  - **Project Page:** https://rdt-robotics.github.io/rdt-robotics/

  ## Uses

- RDT-1B supports finetuning and pre-training on custom dataset, as well as deploying and inferencing on real-robots.
+ RDT takes a language instruction, image observations, and proprioception as input, and predicts the next 64 robot actions as a unified action-space vector
+ covering the main physical quantities of a robot: end-effector and joint positions and velocities, base movement, etc.
+
+ ### Getting Started
+
+ RDT-1B supports fine-tuning on custom datasets, deployment and inference on real robots, as well as pretraining the model.

  Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides.
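
The "unified action-space vector" added in this revision can be pictured as a fixed-width vector with named slots that each robot fills only partially. Below is a minimal illustrative sketch; the slot layout, the 128-dimensional width, and the helper names are assumptions for illustration, not RDT's actual format (the real specification lives in the repository linked above).

```python
import numpy as np

# Hypothetical slot layout for a fixed-width unified action vector.
# Offsets and the 128-dim width are illustrative assumptions only.
UNIFIED_DIM = 128
SLOTS = {
    "right_arm_joint_pos": slice(0, 7),    # 7-DoF joint positions
    "right_arm_eef_pose":  slice(7, 13),   # xyz + rpy end-effector pose
    "right_gripper":       slice(13, 14),  # gripper open/close
    "left_arm_joint_pos":  slice(14, 21),
    "left_arm_eef_pose":   slice(21, 27),
    "left_gripper":        slice(27, 28),
    "base_velocity":       slice(28, 31),  # vx, vy, yaw rate of a mobile chassis
    # ... remaining dimensions reserved / zero-padded
}

def to_unified(action_parts: dict) -> np.ndarray:
    """Scatter a robot's native action into the padded unified vector."""
    vec = np.zeros(UNIFIED_DIM, dtype=np.float32)
    for name, values in action_parts.items():
        vec[SLOTS[name]] = values
    return vec

# A single-arm, EEF-pose robot only fills the slots it actually has:
vec = to_unified({
    "right_arm_eef_pose": np.zeros(6, dtype=np.float32),
    "right_gripper":      np.array([1.0], dtype=np.float32),
})
print(vec.shape)  # (128,)
```

This padding scheme is what lets one checkpoint serve single-arm and dual-arm robots, joint- or EEF-controlled, position- or velocity-based, as the README claims.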
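The input/output contract described in the README (a language instruction plus 3-view RGB observations and proprioception in, a chunk of 64 future actions out) can be sketched as below. The function name, camera ordering, image shapes, and the random stand-in model are all placeholders, not RDT-1B's actual API; see the repository for the real loading and inference code.

```python
import numpy as np

ACTION_CHUNK = 64   # RDT predicts the next 64 actions per call (per the README)
UNIFIED_DIM = 128   # assumed unified action-space width (illustration only)

def predict_actions(instruction: str,
                    images: np.ndarray,          # (3, H, W, 3): three RGB views, assumed order
                    proprioception: np.ndarray,  # (UNIFIED_DIM,): current robot state
                    ) -> np.ndarray:
    """Stand-in for the real model: returns a (64, 128) action chunk.

    A real deployment would load RDT-1B's weights and run the diffusion
    sampler; this stub only demonstrates the shapes involved.
    """
    assert images.shape[0] == 3, "RDT conditions on 3 RGB views"
    rng = np.random.default_rng(0)
    return rng.standard_normal((ACTION_CHUNK, UNIFIED_DIM)).astype(np.float32)

chunk = predict_actions(
    "pick up the red block",
    images=np.zeros((3, 224, 224, 3), dtype=np.uint8),
    proprioception=np.zeros(UNIFIED_DIM, dtype=np.float32),
)
print(chunk.shape)  # (64, 128): executed action-by-action on the robot
```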