---
license: apache-2.0
---
# LMDrive Model Card
## Model details
**Model type:**
LMDrive is an end-to-end, closed-loop, language-based autonomous driving framework that interacts with a dynamic environment via multi-modal, multi-view sensor data and natural-language instructions.
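To make the closed-loop setting concrete, here is a minimal sketch of the interaction pattern. Every name in it (`SensorInput`, `Control`, `DrivingAgent`) is a hypothetical illustration, not the LMDrive API; see the GitHub repository linked below for the actual interface.
```python
from dataclasses import dataclass

@dataclass
class SensorInput:
    camera_rgb: list    # multi-view camera frames for one timestep
    lidar_points: list  # LiDAR point cloud for one timestep

@dataclass
class Control:
    steer: float     # [-1, 1]
    throttle: float  # [0, 1]
    brake: float     # [0, 1]

class DrivingAgent:
    """Stub agent: given a natural-language instruction up front, it maps
    each new sensor observation to a control command (closed loop)."""
    def __init__(self, instruction: str):
        self.instruction = instruction

    def step(self, obs: SensorInput) -> Control:
        # A real agent would run the vision encoder and the LLM here;
        # this stub simply brakes so the example runs as written.
        return Control(steer=0.0, throttle=0.0, brake=1.0)

agent = DrivingAgent("Turn left at the next intersection.")
print(agent.step(SensorInput(camera_rgb=[], lidar_points=[])))
```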
**Model date:**
LMDrive-1.0 (based on LLaVA-v1.5-7B) was trained in November 2023. The original LLaVA-v1.5 weights must also be downloaded.
**Paper or resources for more information:**
GitHub: https://github.com/opendilab/LMDrive
Paper: https://arxiv.org/abs/2312.07488
**Related weights for the vision encoder:**
https://huggingface.co/deepcs233/LMDrive-vision-encoder-r50-v1.0
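Both sets of weights can be fetched from the Hub. A minimal sketch using `huggingface_hub`, where the vision-encoder repo id comes from this card and `liuhaotian/llava-v1.5-7b` is our assumption for the base LLaVA-v1.5-7B weights:
```python
from huggingface_hub import snapshot_download

# Vision encoder weights referenced above.
encoder_dir = snapshot_download(repo_id="deepcs233/LMDrive-vision-encoder-r50-v1.0")

# Base LLaVA-v1.5-7B weights (repo id is an assumption; adjust if needed).
llava_dir = snapshot_download(repo_id="liuhaotian/llava-v1.5-7b")

print(encoder_dir, llava_dir)
```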
**Where to send questions or comments about the model:**
https://github.com/opendilab/LMDrive/issues
## Intended use
**Primary intended uses:**
The primary use of LMDrive is research on large multimodal models for autonomous driving.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, large multimodal models, autonomous driving, and artificial intelligence.
## Training dataset
- 64K instruction-sensor-control data clips collected in the CARLA simulator: [dataset webpage](https://huggingface.co/datasets/deepcs233/LMDrive)
- Each clip includes one navigation instruction, several notice instructions, a sequence of multi-modal multi-view sensor data, and control signals; clip durations range from 2 to 20 seconds (see the sketch after this list).
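As a minimal sketch of what one clip contains; the key names below are hypothetical illustrations, not the dataset's actual schema:
```python
# One clip: a language-annotated driving segment with per-timestep
# sensor data and control signals.
example_clip = {
    "navigation_instruction": "Follow the road and turn right at the junction.",
    "notice_instructions": ["Watch out for the pedestrian crossing ahead."],
    "frames": [  # one entry per timestep; clips span 2 to 20 seconds
        {
            "cameras": {"front": "rgb_0001.png", "left": "...", "right": "..."},
            "lidar": "lidar_0001.npy",
            "control": {"steer": 0.05, "throttle": 0.4, "brake": 0.0},
        },
    ],
}
```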
## Evaluation benchmark
- LangAuto
- LangAuto-short
- LangAuto-tiny
- LangAuto-notice