Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

fuyu-8b - bnb 8bits
- Model creator: https://huggingface.co/adept/
- Original model: https://huggingface.co/adept/fuyu-8b/

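The sketch below shows one way to load the model in 8-bit through `transformers` and `bitsandbytes`, matching the "bnb 8bits" quantization of this upload. It is a hedged example rather than part of the original card: the repo id is the original model's (substitute this quantized repo's id to pull the pre-quantized weights directly), and `device_map="auto"` is an assumption about your hardware.

```python
# Hedged sketch: load Fuyu-8B with bitsandbytes 8-bit weights via transformers.
from transformers import BitsAndBytesConfig, FuyuForCausalLM, FuyuProcessor

model_id = "adept/fuyu-8b"  # placeholder; swap in this quantized repo's id if preferred

processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bnb 8-bit weights
    device_map="auto",  # assumption: let accelerate place layers on available devices
)
```
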
Original model description:
---
license: cc-by-nc-4.0
---
# Fuyu-8B Model Card

We’re releasing Fuyu-8B, a small version of the multimodal model that powers our product. The model is available on HuggingFace. We think Fuyu-8B is exciting because:
1. It has a much simpler architecture and training procedure than other multi-modal models, which makes it easier to understand, scale, and deploy.
2. It’s designed from the ground up for digital agents, so it can support arbitrary image resolutions, answer questions about graphs and diagrams, answer UI-based questions, and do fine-grained localization on screen images.
3. It’s fast - we can get responses for large images in less than 100 milliseconds.
4. Despite being optimized for our use-case, it performs well at standard image understanding benchmarks such as visual question-answering and natural-image-captioning.

Please note that **the model we have released is a base model. We expect you to need to finetune the model for specific use cases like verbose captioning or multimodal chat.** In our experience, the model responds well to few-shotting and fine-tuning for a variety of use-cases.

## Model

[Fuyu-8B](https://www.adept.ai/blog/fuyu-8b) is a multi-modal text and image transformer trained by [Adept AI](https://www.adept.ai/).

Architecturally, Fuyu is a vanilla decoder-only transformer - there is no image encoder.
Image patches are instead linearly projected into the first layer of the transformer, bypassing the embedding lookup.
We simply treat the transformer decoder like an image transformer (albeit with no pooling and causal attention).
See the below diagram for more details.

![architecture](architecture.png)

This simplification allows us to support arbitrary image resolutions.
To accomplish this, we treat the sequence of image tokens like the sequence of text tokens.
We remove image-specific position embeddings and feed in as many image tokens as necessary in raster-scan order.
To tell the model when a line has broken, we simply use a special image-newline character.
The model can use its existing position embeddings to reason about different image sizes, and we can use images of arbitrary size at training time, removing the need for separate high and low-resolution training stages.

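To make the patch-sequence idea concrete, here is a toy sketch of that flow. It is illustrative only, not Adept's implementation: the 30x30 patch size matches the released model, while the projection and image-newline embedding are freshly initialized stand-ins and the width is treated as a free parameter.

```python
# Toy illustration (not Adept's code): flatten an image into raster-scan patches,
# linearly project each patch, and append an "image newline" embedding after every row.
import torch
import torch.nn as nn

PATCH = 30       # Fuyu tiles images into 30x30 patches
D_MODEL = 4096   # transformer width; illustrative here

proj = nn.Linear(PATCH * PATCH * 3, D_MODEL)         # projection into the first layer
image_newline = nn.Parameter(torch.zeros(D_MODEL))   # stand-in for the image-newline token

def image_to_tokens(image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) with H and W multiples of PATCH -> (num_tokens, D_MODEL)."""
    _, h, w = image.shape
    rows = []
    for top in range(0, h, PATCH):
        row = torch.stack([
            image[:, top:top + PATCH, left:left + PATCH].reshape(-1)
            for left in range(0, w, PATCH)
        ])
        rows.append(torch.cat([proj(row), image_newline[None]]))  # mark the end of the row
    return torch.cat(rows)  # any resolution works: the token count just grows with the image

tokens = image_to_tokens(torch.rand(3, 90, 120))  # 3 patch rows x (4 patches + 1 newline)
print(tokens.shape)                               # torch.Size([15, 4096])
```

Because the patch rows arrive in reading order with explicit newline markers, the model's ordinary 1-D position embeddings are enough to recover the 2-D layout.
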
### Model Description

- **Developed by:** Adept-AI
- **Model type:** Decoder-only multi-modal transformer model
- **License:** [CC-BY-NC](https://creativecommons.org/licenses/by-nc/4.0/deed.en)
- **Model Description:** This is a multi-modal model that can consume images and text and produce text.
- **Resources for more information:** Check out our [blog post](https://www.adept.ai/blog/fuyu-8b).

## Evaluation

Though not the focus of this model, we did evaluate it on standard image understanding benchmarks:

| Eval Task     | Fuyu-8B | Fuyu-Medium | LLaVA 1.5 (13.5B) | QWEN-VL (10B) | PALI-X (55B) | PALM-e-12B | PALM-e-562B |
| ------------- | ------- | ----------- | ----------------- | ------------- | ------------ | ---------- | ----------- |
| VQAv2         | 74.2    | 77.4        | 80                | 79.5          | 86.1         | 76.2       | 80.0        |
| OKVQA         | 60.6    | 63.1        | n/a               | 58.6          | 66.1         | 55.5       | 66.1        |
| COCO Captions | 141     | 138         | n/a               | n/a           | 149          | 135        | 138         |
| AI2D          | 64.5    | 73.7        | n/a               | 62.3          | 81.2         | n/a        | n/a         |

## How to Use

You can load the model and perform inference as follows:
```python
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image
import requests

# load model and processor
model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, device_map="cuda:0")

# prepare inputs for the model
text_prompt = "Generate a coco-style caption.\n"
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=text_prompt, images=image, return_tensors="pt").to("cuda:0")

# autoregressively generate text
generation_output = model.generate(**inputs, max_new_tokens=7)
generation_text = processor.batch_decode(generation_output[:, -7:], skip_special_tokens=True)
assert generation_text == ['A blue bus parked on the side of a road.']
```

N.B.: The token `|SPEAKER|` is a placeholder token for image patch embeddings, so it will show up in the model context (e.g., in the portion of `generation_output` representing the model context).
`|NEWLINE|` is the "image newline" token, denoting new rows in the raster scan order input of the image patches.
`\x04` is the "beginning of answer" token.

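One way to see these tokens is to decode the full generated sequence, context included, without skipping special tokens. The snippet below is a small sketch that reuses `processor` and `generation_output` from the captioning example above.

```python
# Sketch: decode the whole sequence (context + answer) to inspect the raw prompt tokens.
full_text = processor.batch_decode(generation_output, skip_special_tokens=False)[0]
print(full_text)  # the image-patch placeholders and row markers appear before the answer
```
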
Fuyu can also perform some question answering on natural images and charts/diagrams (though fine-tuning may be required for good performance):
```python
text_prompt = "What color is the bus?\n"
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=text_prompt, images=image, return_tensors="pt").to("cuda:0")

generation_output = model.generate(**inputs, max_new_tokens=6)
generation_text = processor.batch_decode(generation_output[:, -6:], skip_special_tokens=True)
assert generation_text == ["The bus is blue.\n"]


text_prompt = "What is the highest life expectancy at birth of male?\n"
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/chart.png"
image = Image.open(requests.get(url, stream=True).raw)

model_inputs = processor(text=text_prompt, images=image, return_tensors="pt").to("cuda:0")

generation_output = model.generate(**model_inputs, max_new_tokens=16)
generation_text = processor.batch_decode(generation_output[:, -16:], skip_special_tokens=True)
assert generation_text == ["The life expectancy at birth of males in 2018 is 80.7.\n"]
```
For best performance, it's recommended to end questions with `\n`, as shown above!

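The card also notes that the base model responds well to few-shotting. The sketch below shows one way to pack a worked question/answer pair into the text prompt before a new question; the exact few-shot layout is our assumption, not a documented prompt format.

```python
# Hedged sketch: few-shot prompting by placing a worked Q/A pair ahead of the new question.
# The Q/A layout is an assumption about the prompt format, not documented behavior.
text_prompt = (
    "What color is the bus?\nThe bus is blue.\n"  # worked example
    "What is the bus doing?\n"                    # new question for the model
)
url = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=text_prompt, images=image, return_tensors="pt").to("cuda:0")
generation_output = model.generate(**inputs, max_new_tokens=10)
answer = processor.batch_decode(
    generation_output[:, inputs["input_ids"].shape[1]:],  # keep only the newly generated tokens
    skip_special_tokens=True,
)
print(answer)
```
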
## Uses

### Direct Use

The model is intended for research purposes only.
**Because this is a raw model release, we have not added further finetuning, postprocessing or sampling strategies to control for undesirable outputs. You should expect to have to fine-tune the model for your use-case.**

Possible research areas and tasks include:

- Applications in computer control or digital agents.
- Research on multi-modal models generally.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for this model's abilities.

## Limitations and Bias

### Limitations

- Faces and people in general may not be generated properly.

### Bias

While the capabilities of these models are impressive, they can also reinforce or exacerbate social biases.