doubility123 committed
Commit d2b8209 · Parent(s): 40ec0a4

Update README.md

---
license: other
license_name: deepseek
license_link: LICENSE
---

## 1. Introduction

DeepSeek-VL is an open-source Vision-Language (VL) model designed for real-world vision and language understanding applications. It possesses general multimodal understanding capabilities and can process logical diagrams, web pages, formulas, scientific literature, and natural images, as well as embodied-intelligence tasks in complex scenarios.

[DeepSeek-VL: Towards Real-World Vision-Language Understanding](https://arxiv.org/abs/2403.05525)

Haoyu Lu*, Wen Liu*, Bo Zhang**, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan (*Equal Contribution, **Project Leader)

![DeepSeek-VL sample results](https://github.com/deepseek-ai/DeepSeek-VL/blob/main/images/sample.jpg?raw=true)

## 2. Model Summary

DeepSeek-VL-7b-base uses [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) and [SAM-B](https://huggingface.co/facebook/sam-vit-base) as a hybrid vision encoder supporting 1024 x 1024 image input, and is built on DeepSeek-LLM-7b-base, which was pretrained on a corpus of approximately 2T text tokens. The whole DeepSeek-VL-7b-base model is then trained on around 400B vision-language tokens.
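
The split between the hybrid vision encoder and the language backbone is visible in the repository's `config.json`. Below is a minimal sketch for inspecting it with `huggingface_hub` (which the `transformers` install pulls in); the repo id mirrors the model paths used in this card:

```python
from huggingface_hub import hf_hub_download

# download just the model configuration file and print it;
# it describes the vision encoder and language model components
config_path = hf_hub_download("deepseek-ai/deepseek-vl-7b-base", "config.json")
with open(config_path) as f:
    print(f.read())
```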

## 3. Quick Start

### Installation

With a `Python >= 3.8` environment, install the necessary dependencies by running the following command:

```shell
git clone https://github.com/deepseek-ai/DeepSeek-VL
cd DeepSeek-VL

pip install -r requirements.txt -e .
```
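
To verify the setup, a quick import check can help (a hypothetical one-liner; it assumes the package installs under the `deepseek_vl` name used in the inference example below):

```shell
python -c "from deepseek_vl.models import VLChatProcessor; print('deepseek_vl imported OK')"
```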

### Simple Inference Example

```python
import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import VLChatProcessor, MultiModalityCausalLM
from deepseek_vl.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl-7b-chat"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

conversation = [
    {
        "role": "User",
        "content": "<image_placeholder>Describe each stage of this image.",
        "images": ["./images/training_pipelines.png"]
    },
    {
        "role": "Assistant",
        "content": ""
    }
]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```
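
For interactive use, you can print tokens as they are generated instead of waiting for the full response. This is a minimal sketch using the generic `TextStreamer` from `transformers` (not a DeepSeek-specific API); it reuses `tokenizer`, `inputs_embeds`, and `prepare_inputs` from the example above:

```python
from transformers import TextStreamer

# stream decoded tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
    streamer=streamer,
)
```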

### CLI Chat

```bash
python cli_chat.py --model_path "deepseek-ai/deepseek-vl-7b-chat"

# or a local path
python cli_chat.py --model_path "local model path"
```

## 4. License

This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE). The use of DeepSeek-VL Base/Chat models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL). The DeepSeek-VL series (including Base and Chat) supports commercial use.

## 5. Citation

```bibtex
@misc{lu2024deepseekvl,
      title={DeepSeek-VL: Towards Real-World Vision-Language Understanding},
      author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan},
      year={2024},
      eprint={2403.05525},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```
120
+
121
+ ## 6. Contact
122
+
123
+ If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).