Update README.md
README.md
---
library_name: transformers
license: mit
datasets:
- biglab/jitteredwebsites-merged-224-paraphrased
- biglab/jitteredwebsites-merged-224-paraphrased-paired
- biglab/uiclip_human_data_hf
base_model:
- openai/clip-vit-base-patch32
---

# Model Card for UIClip

UIClip is a model designed to quantify the design quality and relevance of a user interface (UI) screenshot given a textual description.

### Model Description

UIClip is a model designed to quantify the design quality and relevance of a user interface (UI) screenshot given a textual description. The model can also be used to generate natural language design suggestions (see the paper). UIClip is described in the publication "UIClip: A Data-driven Model for Assessing User Interface Design," presented at UIST 2024 (https://arxiv.org/abs/2404.12500).

User interface (UI) design is a difficult yet important task for ensuring the usability, accessibility, and aesthetic qualities of applications. In our paper, we develop a machine-learned model, UIClip, for assessing the design quality and visual relevance of a UI given its screenshot and natural language description. To train UIClip, we used a combination of automated crawling, synthetic augmentation, and human ratings to construct a large-scale dataset of UIs, collated by description and ranked by design quality. Through training on the dataset, UIClip implicitly learns properties of good and bad designs by i) assigning a numerical score that represents a UI design's relevance and quality and ii) providing design suggestions. In an evaluation that compared the outputs of UIClip and other baselines to UIs rated by 12 human designers, we found that UIClip achieved the highest agreement with ground-truth rankings. Finally, we present three example applications that demonstrate how UIClip can facilitate downstream applications that rely on instantaneous assessment of UI design quality: i) UI code generation, ii) UI design tips generation, and iii) quality-aware UI example search.

- **Developed by:** BigLab
- **Model type:** CLIP-style multi-modal dual-encoder Transformer
- **Language(s) (NLP):** English
- **License:** MIT
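
The datasets used to train UIClip are listed in the model metadata above and are hosted on the Hugging Face Hub. As a minimal sketch (not part of the original card), they can be inspected with the `datasets` library; the available splits and column layout are assumptions, so check the dataset pages for the exact schema:

```python
from datasets import load_dataset

# Quick look at one of the training datasets listed in the metadata above.
# The splits and columns shown here are assumptions; verify on the dataset page.
ds = load_dataset("biglab/jitteredwebsites-merged-224-paraphrased")
print(ds)
```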

### Example Code
```python
import torch
from transformers import CLIPProcessor, CLIPModel

IMG_SIZE = 224
DEVICE = "cpu"  # can also be "cuda" or "mps"
LOGIT_SCALE = 100  # based on OpenAI's CLIP example code
NORMALIZE_SCORING = True

model_path = "uiclip_jitteredwebsites-2-224-paraphrased_webpairs_humanpairs"  # can also be the regular or webpairs variants
processor_path = "openai/clip-vit-base-patch32"

model = CLIPModel.from_pretrained(model_path)
model = model.eval()
model = model.to(DEVICE)

processor = CLIPProcessor.from_pretrained(processor_path)
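
# Scoring approach used below: each description is prefixed with
# "ui screenshot. well-designed. " and compared against the image embedding.
# With NORMALIZE_SCORING enabled, a second version of the description with
# "well-designed." swapped for "poor design." is also scored, and a softmax over
# the two similarities yields the probability assigned to the "well-designed"
# prompt, which is returned as the quality score.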
def compute_quality_scores(input_list):
    # input_list is a list of tuples where the first element is a description (str) and the second is a PIL image
    description_list = ["ui screenshot. well-designed. " + input_item[0] for input_item in input_list]
    img_list = [input_item[1] for input_item in input_list]
    text_embeddings_tensor = compute_description_embeddings(description_list)  # B x H
    img_embeddings_tensor = compute_image_embeddings(img_list)  # B x H

    # normalize tensors
    text_embeddings_tensor /= text_embeddings_tensor.norm(dim=-1, keepdim=True)
    img_embeddings_tensor /= img_embeddings_tensor.norm(dim=-1, keepdim=True)

    if NORMALIZE_SCORING:
        text_embeddings_tensor_poor = compute_description_embeddings([d.replace("well-designed. ", "poor design. ") for d in description_list])  # B x H
        text_embeddings_tensor_poor /= text_embeddings_tensor_poor.norm(dim=-1, keepdim=True)
        text_embeddings_tensor_all = torch.stack((text_embeddings_tensor, text_embeddings_tensor_poor), dim=1)  # B x 2 x H
    else:
        text_embeddings_tensor_all = text_embeddings_tensor.unsqueeze(1)

    img_embeddings_tensor = img_embeddings_tensor.unsqueeze(1)  # B x 1 x H

    scores = (LOGIT_SCALE * img_embeddings_tensor @ text_embeddings_tensor_all.permute(0, 2, 1)).squeeze(1)

    if NORMALIZE_SCORING:
        scores = scores.softmax(dim=-1)

    return scores[:, 0]

def compute_description_embeddings(descriptions):
    inputs = processor(text=descriptions, return_tensors="pt", padding=True)
    inputs['input_ids'] = inputs['input_ids'].to(DEVICE)
    inputs['attention_mask'] = inputs['attention_mask'].to(DEVICE)
    text_embedding = model.get_text_features(**inputs)
    return text_embedding

def compute_image_embeddings(image_list):
    windowed_batch = [slide_window_over_image(img, IMG_SIZE) for img in image_list]
    inds = []
    for imgi in range(len(windowed_batch)):
        inds.append([imgi for _ in windowed_batch[imgi]])

    processed_batch = [item for sublist in windowed_batch for item in sublist]
    inputs = processor(images=processed_batch, return_tensors="pt")
    # run all sub-windows of all images in the batch through the model
    inputs['pixel_values'] = inputs['pixel_values'].to(DEVICE)
    with torch.no_grad():
        image_features = model.get_image_features(**inputs)

    # output contains all sub-windows, so mask out the windows belonging to each image
    processed_batch_inds = torch.tensor([item for sublist in inds for item in sublist]).long().to(image_features.device)
    embed_list = []
    for i in range(len(image_list)):
        mask = processed_batch_inds == i
        embed_list.append(image_features[mask].mean(dim=0))
    image_embedding = torch.stack(embed_list, dim=0)
    return image_embedding
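
# Image helpers: preresize_image scales a screenshot so its shorter side equals the
# target size, and slide_window_over_image splits it into square crops by sliding a
# window along the longer dimension; compute_image_embeddings above averages the
# crop embeddings to represent the full screenshot.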
def preresize_image(image, image_size):
    aspect_ratio = image.width / image.height
    if aspect_ratio > 1:
        image = image.resize((int(aspect_ratio * image_size), image_size))
    else:
        image = image.resize((image_size, int(image_size / aspect_ratio)))
    return image

def slide_window_over_image(input_image, img_size):
    input_image = preresize_image(input_image, img_size)
    width, height = input_image.size
    square_size = min(width, height)
    longer_dimension = max(width, height)
    num_steps = (longer_dimension + square_size - 1) // square_size

    if num_steps > 1:
        step_size = (longer_dimension - square_size) // (num_steps - 1)
    else:
        step_size = square_size

    cropped_images = []

    for y in range(0, height - square_size + 1, step_size if height > width else square_size):
        for x in range(0, width - square_size + 1, step_size if width > height else square_size):
            left = x
            upper = y
            right = x + square_size
            lower = y + square_size
            cropped_image = input_image.crop((left, upper, right, lower))
            cropped_images.append(cropped_image)

    return cropped_images


# compute the quality scores for a list of descriptions (strings) and images (PIL images)
prediction_scores = compute_quality_scores(list(zip(test_descriptions, test_images)))
```
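
The last line of the example above assumes `test_descriptions` (a list of strings) and `test_images` (a list of PIL images) have already been defined. A minimal sketch of preparing these inputs, with a placeholder file name and description:

```python
from PIL import Image

# Placeholder inputs: substitute your own screenshot paths and descriptions.
test_images = [Image.open("example_screenshot.png").convert("RGB")]
test_descriptions = ["a checkout page for an online clothing store"]

# Returns one score per (description, image) pair; with NORMALIZE_SCORING enabled,
# higher values indicate a UI judged as better designed and more relevant.
prediction_scores = compute_quality_scores(list(zip(test_descriptions, test_images)))
print(prediction_scores)
```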