---
license: mit
language:
  - en
pipeline_tag: robotics
library_name: transformers
tags:
  - robotics
  - pytorch
  - multimodal
  - pretraining
  - vla
  - diffusion
  - rdt
---

# RDT-170M

RDT-170M is a 170M-parameter imitation-learning Diffusion Transformer (referred to as RDT (small) in our ablation studies). It has a hidden size of 1024 and a depth of 14, half of those in RDT-1B. Given a language instruction and RGB images from up to three views, RDT can predict the next 64 robot actions. RDT is compatible with almost all modern mobile manipulators, from single-arm to dual-arm, joint to EEF, position to velocity, and even wheeled locomotion.

All the code, pre-trained model weights, and data are licensed under the MIT license.

Please refer to our project page and paper for more information.

## Model Details

### Uses

RDT takes a language instruction, RGB images (from up to three views), the control frequency (if any), and proprioception as input, and predicts the next 64 robot actions. RDT supports control of almost all robot manipulators with the help of a unified action space, which includes all the main physical quantities of a robot manipulator (e.g., end-effector and joints, position and velocity, and wheeled locomotion). To deploy RDT on your robot platform, you need to fill the relevant quantities of the raw action vector into the corresponding slots of the unified space vector, as sketched below. See our repository for more information.
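
For illustration, here is a minimal sketch of what that packing step can look like. The 128-dimensional size, the index names, and the `pack_unified_vector` helper are assumptions made up for this example; the actual unified-space definition and mapping live in our repository.

```python
# Illustrative sketch only (not the repository's API): pack a robot's raw
# readings into a unified space vector. The 128-dim size and index names
# below are assumptions; use the mapping defined in the repository instead.
import numpy as np

UNI_VEC_DIM = 128  # assumed size of the unified action/state space

# Hypothetical index mapping for a single 7-DoF arm with a gripper
IDX = {f"right_arm_joint_{i}_pos": i for i in range(7)}
IDX["right_gripper_open"] = 7

def pack_unified_vector(joint_positions, gripper_open):
    """Fill only the quantities your robot provides; leave the rest at zero."""
    uni_vec = np.zeros(UNI_VEC_DIM, dtype=np.float32)
    for i, q in enumerate(joint_positions):
        uni_vec[IDX[f"right_arm_joint_{i}_pos"]] = q
    uni_vec[IDX["right_gripper_open"]] = gripper_open
    return uni_vec

# Example: a zero joint configuration with an open gripper
proprio = pack_unified_vector(joint_positions=[0.0] * 7, gripper_open=1.0)
```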

**Out-of-Scope**: Due to the embodiment gap, RDT cannot yet generalize to new robot platforms that were not seen in the pre-training datasets. In that case, we recommend collecting a small dataset on the target robot and using it to fine-tune RDT. See our repository for a tutorial.

Here's an example of how to use the RDT-1B model for inference on a robot; to run this RDT-170M checkpoint instead, set `pretrained='robotics-diffusion-transformer/rdt-170m'` below:

```python
# Please first clone the repository and install dependencies
# Then switch to the root directory of the repository by "cd RoboticsDiffusionTransformer"

# Import the create function and required dependencies
import torch
from typing import List
from PIL import Image

from scripts.agilex_model import create_model

# Names of cameras used for visual input
CAMERA_NAMES = ['cam_high', 'cam_right_wrist', 'cam_left_wrist']
config = {
    'episode_len': 1000,  # Max length of one episode
    'state_dim': 14,      # Dimension of the robot's state
    'chunk_size': 64,     # Number of actions to predict in one step
    'camera_names': CAMERA_NAMES,
}
pretrained_vision_encoder_name_or_path = "google/siglip-so400m-patch14-384" 
# Create the model with the specified configuration
policy = create_model(
    args=config,
    dtype=torch.bfloat16, 
    pretrained_vision_encoder_name_or_path=pretrained_vision_encoder_name_or_path,
    pretrained='robotics-diffusion-transformer/rdt-1b',
    control_frequency=25,
)

# Start inference process
# Load the pre-computed language embeddings
# Refer to scripts/encode_lang.py for how to encode the language instruction
lang_embeddings_path = 'your/language/embedding/path'
text_embedding = torch.load(lang_embeddings_path)['embeddings']  
images: List[Image.Image] = ...  # The images from the last 2 frames
proprio = ... # The current robot state
# Perform inference to predict the next `chunk_size` actions
actions = policy.step(
    proprio=proprio,
    images=images,
    text_embeds=text_embedding,
)
```
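
The language embeddings loaded above are pre-computed offline. Below is a minimal sketch of that step, assuming a T5-style text encoder; the `google/t5-v1_1-xxl` checkpoint name and the output path are assumptions, and the authoritative procedure is `scripts/encode_lang.py` in our repository.

```python
# Hedged sketch of pre-computing language embeddings; refer to
# scripts/encode_lang.py for the actual procedure. The encoder checkpoint
# below is an assumption.
import torch
from transformers import AutoTokenizer, T5EncoderModel

encoder_name = "google/t5-v1_1-xxl"  # assumed text encoder
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
text_encoder = T5EncoderModel.from_pretrained(
    encoder_name, torch_dtype=torch.bfloat16
).eval()

instruction = "Pick up the red block and place it into the box."
tokens = tokenizer(instruction, return_tensors="pt")
with torch.no_grad():
    embeddings = text_encoder(**tokens).last_hidden_state  # (1, seq_len, hidden_dim)

# Save in the format expected by the inference snippet above
torch.save({"embeddings": embeddings}, "lang_embedding.pt")
```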

## Citation

If you find our work helpful, please cite us:

```bibtex
@article{liu2024rdt,
  title={RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation},
  author={Liu, Songming and Wu, Lingxuan and Li, Bangguo and Tan, Hengkai and Chen, Huayu and Wang, Zhengyi and Xu, Ke and Su, Hang and Zhu, Jun},
  journal={arXiv preprint arXiv:2410.07864},
  year={2024}
}
```

Thank you!