gitlost-murali committed
Commit 61ea964
1 Parent(s): 35ce717

Create README.md

Files changed (1)
  1. README.md +196 -0
README.md ADDED
@@ -0,0 +1,196 @@
---
license: apache-2.0
language:
- en
- fr
- ro
- de
pipeline_tag: image-to-text
tags:
- refexp
- UI
---

> Note: Since Google did not convert and upload these checkpoints, I converted them from Google Cloud to the Hugging Face format.

# Model card for Pix2Struct - Finetuned on TextCaps

![model_image](https://s3.amazonaws.com/moonup/production/uploads/1678713353867-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#tldr)
1. [Using the model](#using-the-model)
2. [Contribution](#contribution)
3. [Citation](#citation)

# TL;DR

Pix2Struct is an image-encoder/text-decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper:

![Table 1 - paper](https://s3.amazonaws.com/moonup/production/uploads/1678712985040-62441d1d9fdefb55a0b7d12c.png)

The abstract of the paper states:

> Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.

# Using the model

## Converting from T5x to Hugging Face

You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE
```
If you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)

model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
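
Once pushed, the checkpoint can be loaded back directly from the Hub. A minimal sketch, reusing the placeholder repo name `USERNAME/MODEL_NAME` from the snippet above:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Load the converted checkpoint back from the Hub (placeholder repo name)
model = Pix2StructForConditionalGeneration.from_pretrained("USERNAME/MODEL_NAME")
processor = Pix2StructProcessor.from_pretrained("USERNAME/MODEL_NAME")
```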

## Running the model

### In full precision, on CPU:

You can run the model in full precision on CPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.
```

### In full precision, on GPU:

You can run the model in full precision on GPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base").to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.
```

### In half precision, on GPU:

You can run the model in half precision (`bfloat16`) on GPU:
```python
import requests
import torch

from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.bfloat16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.
```

### Use a different sequence length

This model has been trained on a sequence length of `2048`. You can reduce the sequence length for more memory-efficient inference, but you may observe some performance degradation for small sequence lengths (< 512). Just pass `max_patches` when calling the processor:
```python
inputs = processor(images=image, return_tensors="pt", max_patches=512)
```
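
For reference, a minimal end-to-end sketch with the reduced patch budget, reusing the stop-sign image and checkpoint from the examples above:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# Cap the number of image patches at 512 to reduce memory usage
inputs = processor(images=image, return_tensors="pt", max_patches=512)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
```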

### Conditional generation

You can also prepend some input text to perform conditional generation:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image + text prompt
inputs = processor(images=image, text=text, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A picture of a stop sign that says yes.
```
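
The processor also accepts a list of images, so the same pattern extends to batched captioning. A small sketch under that assumption, reusing the stop-sign image twice purely for illustration:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# Batch of images (the same image twice, only to illustrate the shapes)
inputs = processor(images=[image, image], return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.batch_decode(predictions, skip_special_tokens=True))
```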

# Contribution

This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by [Murali Manohar](https://huggingface.co/gitlost-murali).

# Citation

If you want to cite this work, please consider citing the original paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.03347,
  doi = {10.48550/ARXIV.2210.03347},
  url = {https://arxiv.org/abs/2210.03347},
  author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina},
  keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```