Update README.md
README.md CHANGED

@@ -7,6 +7,7 @@ datasets:
 - kakaobrain/coyo-700m
 - conceptual_captions
 - wanng/wukong100m
+pipeline_tag: visual-question-answering
 ---
 
 # Model Card for InternVL-Chat-Chinese-V1.2-Plus
@@ -125,4 +126,4 @@ Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Plat
 
 ## Acknowledgement
 
-InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
+InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
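For context, the metadata change above appends a `pipeline_tag` key to the tail of the model card's YAML front matter; this tag is what the Hugging Face Hub uses to categorize the model under a task. The sketch below shows only the portion of the front matter visible in this diff; any keys above line 7 are not part of the change and are omitted here.

```yaml
datasets:
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: visual-question-answering  # added in this commit
---
```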