---
license: apache-2.0
datasets:
- wentao-yuan/robopoint-data
base_model:
- meta-llama/Llama-2-13b-chat-hf
---
# RoboPoint-v1-Llama2-13B-LoRA
RoboPoint is an open-source vision-language model instruction-tuned on a mix of robotics and VQA data. Given an image and a language instruction, it outputs precise action guidance as points in the image.

## Primary Use Cases
RoboPoint predicts spatial affordances, i.e. where actions should be taken in relation to other entities in the scene, based on language instructions. For example, it can identify free space on a shelf in front of the rightmost object.
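
The model answers in plain text. As a minimal, hypothetical post-processing sketch, the snippet below assumes the answer is a Python-style list of `(x, y)` tuples with coordinates normalized to `[0, 1]` (check the demo scripts in the GitHub repository for the exact format) and converts it to pixel locations.

```python
import ast

def parse_points(answer: str, width: int, height: int):
    """Parse an answer such as "[(0.62, 0.31), (0.58, 0.44)]" into pixel
    coordinates, assuming normalized (x, y) tuples; adjust if the model
    you run emits pixel values instead."""
    points = ast.literal_eval(answer.strip())
    return [(int(x * width), int(y * height)) for x, y in points]

# Hypothetical model output for a 640x480 image
print(parse_points("[(0.62, 0.31), (0.58, 0.44)]", width=640, height=480))
# [(396, 148), (371, 211)]
```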

## Model Details
This model was fine-tuned using [LoRA](https://arxiv.org/abs/2106.09685) from [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) and has 13 billion parameters.
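
Because this repository holds LoRA adapter weights rather than a merged checkpoint, inference requires attaching them to the Llama-2 base model (and the vision encoder) with the utilities in the RoboPoint code base. The sketch below only fetches the adapter files with `huggingface_hub`; the repository ID is inferred from this model card and should be verified against the Hub page.

```python
from huggingface_hub import snapshot_download

# Repo ID assumed from the model card title; verify it on the Hub.
REPO_ID = "wentao-yuan/robopoint-v1-llama-2-13b-lora"

# Download the LoRA adapter files. Merging them onto
# meta-llama/Llama-2-13b-chat-hf and loading the vision tower is handled by
# the scripts in https://github.com/wentaoyuan/RoboPoint.
adapter_dir = snapshot_download(REPO_ID)
print("Adapter downloaded to:", adapter_dir)
```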

## Date
This model was trained in June 2024.

## Resources for More Information

- Paper: https://arxiv.org/pdf/2406.10721
- Code: https://github.com/wentaoyuan/RoboPoint
- Website: https://robo-point.github.io

## Training Dataset
See [wentao-yuan/robopoint-data](https://huggingface.co/datasets/wentao-yuan/robopoint-data).

## Citation
If you find our work helpful, please consider citing our paper.
```
@article{yuan2024robopoint,
  title={RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics},
  author={Yuan, Wentao and Duan, Jiafei and Blukis, Valts and Pumacay, Wilbert and Krishna, Ranjay and Murali, Adithyavairavan and Mousavian, Arsalan and Fox, Dieter},
  journal={arXiv preprint arXiv:2406.10721},
  year={2024}
}
```