Text Generation · Transformers · Safetensors · imp_qwen2 · conversational · custom_code
Oyoy1235 committed
Commit dfb7112
1 Parent(s): c71d1b5

update readme

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -11,7 +11,7 @@ datasets:
 
 ## Introduction
 
- The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-2B-Qwen1.5` is a strong MSLM with only **2B** parameters, which is built upon a small yet powerful SLM [Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) (1.8B) and a powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on a 1M mixed dataset.
+ The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-2B-Qwen1.5` is a strong lightweight LMM with only **2B** parameters, which is built upon [Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) (1.8B) and a powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on a 1M mixed dataset.
 
 As shown in the table below, `Imp-v1.5-2B-Qwen1.5` significantly outperforms counterparts of similar model size.
 
@@ -60,7 +60,7 @@ print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True
 ```
 
 ## Model evaluation
- We conduct evaluations on 9 commonly used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing MSLMs of similar model sizes.
+ We conduct evaluations on 9 commonly used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
 
 | Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB | MMB-CN | MM-Vet |
 |:------:|:----:|:-----:|:---:|:--------:|:-------:|:----:|:------:|:---:|:------:|:------:|
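
For context, the second hunk's header shows the tail of the README's usage snippet (`print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True...`). Below is a minimal sketch of that usage pattern, assuming the repo id `MILVLG/Imp-v1.5-2B-Qwen1.5` and assuming the model's custom code (loaded via `trust_remote_code=True`, as implied by the `custom_code` tag) exposes an `image_preprocess` helper and an `images=` argument to `generate`, as in earlier Imp releases. These names are assumptions, not details confirmed by this diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Repo id is an assumption; adjust to the actual Hugging Face model id.
model_id = "MILVLG/Imp-v1.5-2B-Qwen1.5"

# The custom_code tag implies the repo ships its own modeling code,
# so trust_remote_code=True is required to load it.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Prompt template with an <image> placeholder, assumed to follow the
# LLaVA-style format used by other Imp releases.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: <image>\nWhat is in this picture? ASSISTANT:"
)
image = Image.open("example.jpg")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
# image_preprocess is assumed to be provided by the repo's custom code.
image_tensor = model.image_preprocess(image)

output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,  # assumed custom kwarg carrying the visual input
    use_cache=True,
)[0]

# This final line matches the one visible in the diff's hunk header.
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```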