DaozeZhang committed
Commit • 852fd11
1 Parent(s): 7c1a677
update

Browse files
- 1.txt +0 -1
- README (1).md +0 -86

1.txt
DELETED
@@ -1 +0,0 @@
test

README (1).md
DELETED
@@ -1,86 +0,0 @@

# LLaVA-LLaMA-3.1 Model Card

## Model Details

LLaVA is an open-source chatbot trained by fine-tuning a large language model (LLM) on multimodal instruction-following data. We now release a LLaVA model that uses `meta-llama/Meta-Llama-3.1-8B-Instruct` as its language model.
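
The released weights can be used for multimodal chat through the same [MS-SWIFT](https://github.com/modelscope/ms-swift) CLI that is used for training below. The snippet is a minimal sketch rather than an official recipe: the model path placeholder is hypothetical, and the flags are assumed to follow the ms-swift 2.x interface used in the training scripts of this card.

```
# Minimal interactive-inference sketch (hypothetical placeholder for the model path or ID)
CUDA_VISIBLE_DEVICES=0 swift infer \
    --model_type llava1_6-llama3_1-8b-instruct \
    --model_id_or_path <path_or_id_of_this_model>
```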

### License Notices

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained with the dataset (e.g., the Llama-1/2 community licenses for LLaMA-2 and Vicuna-v1.5, the [Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT), and the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints complies with all applicable laws and regulations.

## Training Details

### Training Procedure

Before training, the weights of the vision encoder are initialized from the pretrained LLaVA-1.6 checkpoint, and the weights of the LLM are initialized from the LLaMA-3.1-8B-Instruct model.

### Training Data

- Stage 1: We train the parameters of the projector on 3K image-text pairs from llava-pretrain.
- Stage 2: We train the parameters of the projector and the LLM on the 558K pairs from llava-pretrain (270K are used after filtering out incomplete data items).
- Stage 3: We instruction-tune the parameters of the projector and the LLM on the llava-instruct-150K data.

### Training Details

The training cost is ~70 hours on 8 NVIDIA A100-80GB GPUs (it may vary due to hardware differences).

All of the training and evaluation was conducted with the [MS-SWIFT](https://github.com/modelscope/ms-swift) framework.
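
To reproduce the runs below, ms-swift has to be installed first. The one-liner below is a minimal environment sketch: it assumes an existing Python/PyTorch setup and that the `[llm]` extra of ms-swift 2.x pulls in the LLM/multimodal training dependencies.

```
# Minimal install sketch for ms-swift 2.x (assumes Python >= 3.8 and PyTorch are already available)
pip install 'ms-swift[llm]' -U
```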

The scripts for all of the training stages are shown as follows:

- Stage 1: Training the projector on the 3K llava-pretrain data:

```
DATASET_ENABLE_CACHE=1 swift sft \
    --model_type llava1_6-llama3_1-8b-instruct \
    --dataset llava-pretrain#3000 \
    --batch_size 1 \
    --gradient_accumulation_steps 16 \
    --warmup_ratio 0.03 \
    --learning_rate 1e-5 \
    --sft_type full \
    --freeze_parameters_ratio 1 \
    --additional_trainable_parameters multi_modal_projector
```

- Stage 2: Training the projector and the language model on the full llava-pretrain dataset:

```
DATASET_ENABLE_CACHE=1 NPROC_PER_NODE=8 swift sft \
    --model_type llava1_6-llama3_1-8b-instruct \
    --resume_from_checkpoint <the_ckpt_in_stage1> \
    --resume_only_model true \
    --dataset llava-pretrain \
    --batch_size 2 \
    --gradient_accumulation_steps 16 \
    --warmup_ratio 0.03 \
    --learning_rate 2e-5 \
    --deepspeed default-zero3 \
    --sft_type full \
    --freeze_parameters_ratio 0
```

- Stage 3: After Stages 1 and 2, the model can understand and describe images, but it still needs instruction-tuning to acquire instruction-following ability:

```
DATASET_ENABLE_CACHE=1 NPROC_PER_NODE=8 swift sft \
    --model_type llava1_6-llama3_1-8b-instruct \
    --resume_from_checkpoint <the_ckpt_in_stage2> \
    --resume_only_model true \
    --dataset llava-instruct-150k \
    --num_train_epochs 2 \
    --batch_size 2 \
    --gradient_accumulation_steps 16 \
    --warmup_ratio 0.03 \
    --learning_rate 1e-5 \
    --deepspeed zero3-offload \
    --sft_type full \
    --freeze_parameters_ratio 0
```

### Evaluation

We evaluate our model on the TextVQA benchmark, also using [MS-SWIFT](https://github.com/modelscope/ms-swift). It achieves an overall accuracy of 60.482, which is roughly comparable to other LLaVA models.

```
CUDA_VISIBLE_DEVICES=0 swift eval \
    --ckpt_dir <the_ckpt_in_stage3> \
    --eval_dataset TextVQA_VAL
```
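
The final checkpoint can also be used directly for chat or serving with the same CLI. The commands below are a sketch under the assumption that `<the_ckpt_in_stage3>` points at the Stage 3 output directory and that the `deploy` subcommand of the same ms-swift version accepts `--ckpt_dir`, as `infer` and `eval` do:

```
# Chat with the fine-tuned checkpoint from the command line
CUDA_VISIBLE_DEVICES=0 swift infer \
    --ckpt_dir <the_ckpt_in_stage3>

# Optionally expose an OpenAI-compatible HTTP server for the checkpoint (assumed flag support, see above)
CUDA_VISIBLE_DEVICES=0 swift deploy \
    --ckpt_dir <the_ckpt_in_stage3>
```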