---
license: apache-2.0
pipeline_tag: image-text-to-text
---
**<center><span style="font-size:2em;">TinyLLaVA</span></center>**

[![arXiv](https://img.shields.io/badge/Arxiv-2402.14289-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.14289)[![Github](https://img.shields.io/badge/Github-Github-blue.svg)](https://github.com/TinyLLaVA/TinyLLaVA_Factory)[![Demo](https://img.shields.io/badge/Demo-Demo-red.svg)](http://8843843nmph5.vicp.fun/#/)
TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 1.4B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.

Here, we introduce TinyLLaVA-Phi-2-SigLIP-3.1B, which is trained with the TinyLLaVA Factory codebase. For the LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The dataset used for training this model is the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
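As a minimal usage sketch, the model can be loaded through `transformers` with `trust_remote_code=True` so that the repository's custom TinyLLaVA modeling code is used. The repo id and the `chat()` helper shown below are assumptions based on how TinyLLaVA Factory checkpoints are typically published; please check the TinyLLaVA Factory documentation for the exact API.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed Hugging Face repo id; replace with the actual model id if it differs.
hf_path = "tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B"

# trust_remote_code=True loads the custom TinyLLaVA modeling code shipped with the repo.
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()

tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False)

prompt = "What are these?"
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# The chat() helper is assumed to be provided by the repo's remote code;
# see the TinyLLaVA Factory repository for the exact signature.
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)
print(output_text)
```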