from opencompass.multimodal.models.openflamingo import OpenFlamingoCaptionPromptConstructor

# dataloader settings
# Resize the short edge to 224 with bicubic interpolation, center-crop to
# 224x224 (the CLIP ViT-L/14 input size), and keep `image_id` so each
# prediction can be matched to its COCO annotation at evaluation time.
val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='mmpretrain.ResizeEdge',
         scale=224,
         interpolation='bicubic',
         backend='pillow'),
    dict(type='CenterCrop', crop_size=(224, 224)),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
]

# COCO Caption validation set (Karpathy split)
dataset = dict(type='mmpretrain.COCOCaption',
               data_root='data/coco',
               data_prefix=dict(img_path='images'),
               ann_file='annotations/coco_karpathy_val.json',
               pipeline=val_pipeline)

openflamingo_coco_caption_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    sampler=dict(type='DefaultSampler', shuffle=False),
    collate_fn=dict(type='default_collate'),
    persistent_workers=True,
)
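
# Illustrative sketch (not part of the config): the dataloader dict above
# can be materialized with mmengine for a quick smoke test, assuming
# mmpretrain is installed and data/coco is populated.
#
#   from mmengine.runner import Runner
#   loader = Runner.build_dataloader(openflamingo_coco_caption_dataloader)
#   batch = next(iter(loader))  # packed inputs plus data samples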

# model settings
openflamingo_coco_caption_model = dict(
    type='openflamingo',
    data_preprocessor=dict(
        type='mmpretrain.MultiModalDataPreprocessor',
        # OpenAI CLIP normalization statistics, scaled to the [0, 255] range
        mean=[122.770938, 116.7460125, 104.09373615],
        std=[68.5005327, 66.6321579, 70.32316305],
        to_rgb=True,
    ),
    tokenizer=dict(type='mmpretrain.LlamaTokenizer',
                   name_or_path='decapoda-research/llama-7b-hf'),
    # CLIP-style ViT-L/14 image encoder (pre-norm, QuickGELU activation),
    # returning raw patch tokens rather than a pooled feature
    vision_encoder=dict(
        type='mmpretrain.VisionTransformer',
        arch='l',
        patch_size=14,
        pre_norm=True,
        norm_cfg=dict(type='LN', eps=1e-5),
        layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
        final_norm=False,
        out_type='raw',
        pretrained='/path/to/vision/encoder',  # noqa
    ),
    lang_encoder=dict(
        # LLaMA-7B base model with Flamingo-style cross-attention adapters
        # inserted every 4 decoder layers; vis_hidden_size matches the
        # ViT-L hidden width (1024)
        base=dict(type='mmpretrain.AutoModelForCausalLM',
                  name_or_path='decapoda-research/llama-7b-hf',
                  local_files_only=True),
        adapter=dict(type='mmpretrain.FlamingoLMAdapter',
                     vis_hidden_size=1024,
                     cross_attn_every_n_layers=4,
                     use_media_placement_augmentation=False),
    ),
    task='caption',
    # Beam search with 3 beams; the negative length penalty biases
    # generation toward shorter captions.
    generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
    prompt_constructor=dict(type=OpenFlamingoCaptionPromptConstructor)
)

# evaluation settings
# Scores generated captions against the Karpathy-split ground truth with
# the standard COCO caption metrics (BLEU, CIDEr, etc.).
openflamingo_coco_caption_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
    )  # noqa
]

# Path to the pretrained OpenFlamingo checkpoint to load
openflamingo_load_from = '/path/to/pretrained/weights'  # noqa
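
# Illustrative sketch (not part of the config): loading this file with
# mmengine's Config reader is one way to check that it parses; the file
# path below is an assumption about where this config is saved.
#
#   from mmengine.config import Config
#   cfg = Config.fromfile(
#       'configs/multimodal/openflamingo/openflamingo_coco_caption.py')
#   print(cfg.openflamingo_coco_caption_model['task'])  # -> 'caption'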