Irena Gao committed
Commit dfb2f2c
1 Parent(s): 0a7a564

update README

Files changed (1)
  1. README.md +81 -16
README.md CHANGED
@@ -4,19 +4,92 @@ datasets:
  - laion2b
  ---
 
- # OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B Instruct)
+ # OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B-Dolly)
 
  [Blog post]() | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo]()
 
  OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
- This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and an instruction-tuned [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) language model.
+ This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and an instruction-tuned [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) language model.
 
  ## Model Details
- We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we use a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
+ We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
+
+ This model has cross-attention modules inserted in *every* decoder block. It was trained using DistributedDataParallel across 64 A100 40GB GPUs at FP32 precision.
+
+ The [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) modeling code does not accept the `labels` kwarg or compute cross-entropy loss within `forward()`. To train with the OpenFlamingo codebase, we suggest using a version with the `labels` kwarg, available [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b-dolly).
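+
+ A `labels`-aware `forward()` of that kind typically shifts the logits by one position and applies token-level cross-entropy. A minimal sketch of that computation is below; the helper is illustrative rather than taken from either repository, and it assumes the Hugging Face convention of `-100` for positions to ignore.
+
+ ``` python
+ import torch
+ import torch.nn.functional as F
+
+ def language_modeling_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
+     # Illustrative helper, not part of the MPT-1B or OpenFlamingo code.
+     # logits: (batch, seq_len, vocab_size); labels: (batch, seq_len) with -100 at ignored positions.
+     # Predict token t+1 from position t, so drop the last logit and the first label.
+     shift_logits = logits[:, :-1, :].contiguous()
+     shift_labels = labels[:, 1:].contiguous()
+     return F.cross_entropy(
+         shift_logits.view(-1, shift_logits.size(-1)),
+         shift_labels.view(-1),
+         ignore_index=-100,
+     )
+ ```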
 
  ## Uses
  OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
 
+ ### Generation example
+ Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
+
+ ``` python
+ import torch
+ import requests
+ from PIL import Image
+
+ """
+ Step 1: Load images
+ """
+ demo_image_one = Image.open(
+     requests.get(
+         "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
+     ).raw
+ )
+
+ demo_image_two = Image.open(
+     requests.get(
+         "http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
+         stream=True
+     ).raw
+ )
+
+ query_image = Image.open(
+     requests.get(
+         "http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
+         stream=True
+     ).raw
+ )
+
+ """
+ Step 2: Preprocessing images
+ Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
+ batch_size x num_media x num_frames x channels x height x width.
+ In this case batch_size = 1, num_media = 3, num_frames = 1,
+ channels = 3, height = 224, width = 224.
+ """
+ vision_x = [
+     image_processor(demo_image_one).unsqueeze(0),
+     image_processor(demo_image_two).unsqueeze(0),
+     image_processor(query_image).unsqueeze(0),
+ ]
+ vision_x = torch.cat(vision_x, dim=0)
+ vision_x = vision_x.unsqueeze(1).unsqueeze(0)
+
+ """
+ Step 3: Preprocessing text
+ Details: In the text we expect an <image> special token to indicate where an image is.
+ We also expect an <|endofchunk|> special token to indicate the end of the text
+ portion associated with an image.
+ """
+ tokenizer.padding_side = "left"  # For generation, padding tokens should be on the left
+ lang_x = tokenizer(
+     ["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
+     return_tensors="pt",
+ )
+
+ """
+ Step 4: Generate text
+ """
+ generated_text = model.generate(
+     vision_x=vision_x,
+     lang_x=lang_x["input_ids"],
+     attention_mask=lang_x["attention_mask"],
+     max_new_tokens=20,
+     num_beams=3,
+ )
+
+ print("Generated text: ", tokenizer.decode(generated_text[0]))
+ ```
+
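+ The generation example assumes that `model`, `image_processor`, and `tokenizer` have already been created. One way to initialize them is with the `open_flamingo` package's `create_model_and_transforms`; the sketch below is a suggestion, and the checkpoint repo id and filename in it are assumptions to be replaced with the ones published for this model.
+
+ ``` python
+ import torch
+ from huggingface_hub import hf_hub_download
+ from open_flamingo import create_model_and_transforms
+
+ # Build the architecture described above: a frozen CLIP ViT-L/14 vision encoder,
+ # a frozen MPT-1B (Dolly) language model, and cross-attention in every decoder block.
+ model, image_processor, tokenizer = create_model_and_transforms(
+     clip_vision_encoder_path="ViT-L-14",
+     clip_vision_encoder_pretrained="openai",
+     lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b-dolly",  # labels-aware MPT-1B version linked above
+     tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b-dolly",
+     cross_attn_every_n_layers=1,  # cross-attention after every decoder block
+ )
+
+ # Download the trained OpenFlamingo weights; the repo id and filename below are
+ # placeholders, so use the ones listed on this model card.
+ checkpoint_path = hf_hub_download(
+     "openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct", "checkpoint.pt"
+ )
+ model.load_state_dict(torch.load(checkpoint_path), strict=False)
+ ```
+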
  ### Bias, Risks, and Limitations
  OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
 
@@ -42,11 +115,11 @@ In an effort to mitigate current potential biases and harms, we have deployed a
  </tr>
  <tr>
  <th>VQAv2 (Accuracy)</th>
- <td>43.6 (0.2)</td>
- <td>45.7 (0.5)</td>
- <td>46.1 (0.5)</td>
- <td>45.3 (0.4)</td>
- <td>44.3 (0.4)</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <th>Flickr-30K (CIDEr)</th>
 
@@ -80,14 +153,6 @@ In an effort to mitigate current potential biases and harms, we have deployed a
  <td>35.5 (0.8)</td>
  <td>41.3 (0.5)</td>
  </tr>
- <tr>
- <th>ImageNet (Top-1 Accuracy)</th>
- <td>-</td>
- <td>-</td>
- <td>-</td>
- <td>-</td>
- <td>-</td>
- </tr>
  <tr>
  <th>Hateful Memes (ROC AUC)</th>
  <td>-</td>