---
license: apache-2.0
datasets:
- FreedomIntelligence/PubMedVision
language:
- en
- zh
pipeline_tag: text-generation
---
<div align="center">
<h1>
HuatuoGPT-Vision-7B
</h1>
</div>

<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-Vision" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2406.19280" target="_blank">Paper</a>
</div>

# <span id="Start">Introduction</span>
HuatuoGPT-Vision is a multimodal LLM for medical applications, built with the [PubMedVision dataset](https://huggingface.co/datasets/FreedomIntelligence/PubMedVision). HuatuoGPT-Vision-7B is trained from Qwen2-7B using the LLaVA-v1.5 architecture.

# <span id="Start">Quick Start</span>

1. Get the model inference code from [GitHub](https://github.com/FreedomIntelligence/HuatuoGPT-Vision).
```bash
git clone https://github.com/FreedomIntelligence/HuatuoGPT-Vision.git
cd HuatuoGPT-Vision  # run the inference snippet below from inside the cloned repository
```
2. Model inference
```python
from cli import HuatuoChatbot

query = 'What does the picture show?'
image_paths = ['image_path1']

bot = HuatuoChatbot(huatuogpt_vision_model_path)  # loads the model from the HuatuoGPT-Vision-7B weights path
output = bot.inference(query, image_paths)        # generates an answer for the query and image(s)
print(output)                                     # prints the model output
```
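Since `inference` takes the query together with a list of image paths, a multi-image query is assumed to work the same way. The sketch below is a hedged example, not part of the official quick start: the weights are fetched from this repository with `huggingface_hub`, it assumes `HuatuoChatbot` accepts the downloaded snapshot directory as its weights path, and the image file names are placeholders rather than files shipped with the code.
```python
from huggingface_hub import snapshot_download
from cli import HuatuoChatbot

# Download the HuatuoGPT-Vision-7B weights from the Hugging Face Hub
# (or set model_path to a directory you have already downloaded).
model_path = snapshot_download('FreedomIntelligence/HuatuoGPT-Vision-7B')

bot = HuatuoChatbot(model_path)  # load the model once and reuse it for further queries

# `image_paths` is a list, so several images per query are assumed to be supported;
# the file names here are hypothetical examples.
image_paths = ['ct_scan.png', 'chest_xray.png']
output = bot.inference('Compare the findings in these two images.', image_paths)
print(output)
```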

# <span id="Start">Citation</span>

```
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280},
}
```