magicfixeseverything committed
Commit: cf6ac90
Parent(s): 0826f53

Upload 2 files

- Instructions.txt +27 -16
- app.py +796 -227
Instructions.txt
CHANGED
@@ -221,22 +221,28 @@ cd C:\Diffusers && .venv\Scripts\activate.bat && pip install diffusers transform
 A whole bunch of things, what might be 100 packages or more, will download
 and install, including any packages needed to run these.
 
-When complete,
-bug that is needed for the menus to work properly.
-
-cd C:\Diffusers && .venv\Scripts\activate.bat && pip install https://gradio-builds.s3.amazonaws.com/
+When complete, you might need to install a later version of Gradio. A later
+version fixes a bug that is needed for the menus to work properly. This
+worked for me:
+
+cd C:\Diffusers && .venv\Scripts\activate.bat && pip install https://gradio-builds.s3.amazonaws.com/6b1401c514c2ec012b0a50c72a6ec81cb673bf1d/gradio-4.8.0-py3-none-any.whl
+
+That was found here:
+
+https://www.gradio.app/docs/blocks
+
+After selecting "main" from the version number dropdown in the left column.
+The link changes. If you don't want to do it that way, just do this instead
+to see if the dropdown menus work:
+
+cd C:\Diffusers && .venv\Scripts\activate.bat && pip install gradio
 
 The gallery feature doesn't allow images to be downloaded using the download
 button in the current version above. Hopefully that will work in a later
 version. (so eventually you will need to try another Gradio version)
 
-That command is described here:
-
-https://www.gradio.app/main/docs/interface
-
 When a later version is eventually called, "gradio" will eventually be added
-to the
-where to install from.
+to the original command rather than having to specify it separately.
 
 When complete, move on to the next step.
 
@@ -404,11 +410,16 @@ cd C:\Diffusers && .venv\Scripts\activate.bat && py .venv\ai_image_creation\app.py
 Step 13 (Important):
 
 I feel this is a very important step. After you have created model data for
-each base model,
-gigabytes or more of data. I
-
-delete old data. If you didn't
-
+each base model, as well as have used the refiner and upscaler, data will
+then have been downloaded. This could be 30 gigabytes or more of data. I
+strongly recommend that you then disable the script from downloading updates
+to the model data. It will not automatically delete old data. If you didn't
+manually go through and delete the older data, eventually the model data
+would use all of the space on your computer.
+
+There is something that is very important to note however. If you do this,
+other installations that use Hugging Face for example, like Automatic1111,
+will not be able to download data and will not work properly.
 
 Once you have downloaded the model data for each model, you can disable
 updating by doing the following in a command prompt:
 
@@ -434,10 +445,10 @@ setx HF_HUB_OFFLINE "0" && REG DELETE HKEY_CURRENT_USER\Environment /v HF_HUB_OFFLINE
 effect, once you restart the script, to be able to download model data
 again.
 
-You can read about environment variables here at
+You can read about environment variables here at Hugging Face:
 https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhuboffline
 
-In regard to
+In regard to Hugging Face caching things, you can learn more on this page:
 https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations
 
 This is just my preferred way of handling it.
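For reference, the toggle these steps rely on is the HF_HUB_OFFLINE environment variable. Disabling downloads is presumably the mirror image of the re-enable command shown in the hunk header above, i.e. running setx HF_HUB_OFFLINE "1" in a command prompt (then opening a new command prompt so it takes effect). Treat that exact command as an assumption here, since the full Instructions.txt text is not shown in this diff.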
app.py
CHANGED
@@ -4,6 +4,7 @@ import torch
 import modin.pandas as pd
 from PIL import Image
 from diffusers import DiffusionPipeline
+import os
 
 ##########
 
@@ -66,11 +67,11 @@ main_dir = "C:/Diffusers"
 ####################
 
 #
-# Use Custom
+# Use Custom Hugging Face Cache Directory
 #
 # The folder where model data is stored can get huge. I choose to add it
 # to a place where I am more likely to notice it more often. If you use
-# other
+# other Hugging Face things however, and will use these models in those
 # other things, then you might want to consider not having this here as
 # it would duplicate the model data.
 #
@@ -171,7 +172,9 @@ make_seed_selection_a_textbox = 0
 #
 # Include Close Command Prompt / Cancel Button
 #
-# This doesn't work well at all. It just closes the command prompt.
+# This doesn't work well at all. It just closes the command prompt. And
+# it currently isn't canceling image creation either when used. Don't use
+# it.
 #
 
 enable_close_command_prompt_button = 0
@@ -229,6 +232,36 @@ show_download_button_for_gallery = 0
 
 ####################
 
+#
+# Show Image Creation Progress Log
+#
+# This adds the current step that image generation is on.
+#
+
+show_image_creation_progress_log = 1
+
+####################
+
+#
+# Show Messages In Command Prompt
+#
+# Messages will be printed in command prompt.
+#
+
+show_messages_in_command_prompt = 1
+
+####################
+
+#
+# Show Messages In Modal On Page
+#
+# A popup appears in the top right corner on the page.
+#
+
+show_messages_in_modal_on_page = 0
+
+####################
+
 #
 # Up Next Is Various Configuration Arrays and Objects
 #
@@ -341,6 +374,21 @@ model_configuration_force_refiner_object = {
 	"sdxl_2023-09-05": 1
 }
 
+# For now, the ones that force the refiner also have the "Refiner Number of
+# Iterations" available.
+
+model_configuration_include_refiner_number_of_steps_object = model_configuration_force_refiner_object
+
+#model_configuration_include_refiner_number_of_steps_object = {
+#	"sdxl_2023-11-12": 1,
+#	"sdxl_2023-09-05": 1
+#}
+
+####################
+
+hugging_face_refiner_partial_path = "stabilityai/stable-diffusion-xl-refiner-1.0"
+hugging_face_upscaler_partial_path = "stabilityai/sd-x2-latent-upscaler"
+
 ####################
 
 base_model_model_configuration_defaults_object = {
@@ -403,7 +451,7 @@ if device == "cpu":
 
 ####################
 
-default_prompt = ""
+default_prompt = "black cat"
 default_negative_prompt = ""
 
 default_width = 768
@@ -414,7 +462,8 @@ default_guidance_scale_value = 7
 default_base_model_base_model_num_inference_steps = 50
 default_base_model_base_model_num_inference_steps_for_sdxl_turbo = 2
 
-default_seed_maximum = 999999999999999999
+#default_seed_maximum = 999999999999999999
+default_seed_maximum = 1000000000000000000
 default_seed_value = 876678173805928800
 
 # If you turn off the refiner it will not be available in the display unless
@@ -428,6 +477,14 @@ enable_upscaler = 1
 default_refiner_selected = 0
 default_upscaler_selected = 0
 
+# Accordion visible on load?
+#
+# 0	If selected as default, will be open. Otherwise, closed.
+# 1	Always starts open
+
+default_refiner_accordion_open = 1
+default_upscaler_accordion_open = 1
+
 # xFormers:
 #
 # https://huggingface.co/docs/diffusers/optimization/xformers
@@ -492,16 +549,11 @@ width_and_height_input_slider_steps = 8
 
 
 
-show_messages_in_command_prompt = 1
-show_messages_in_modal_on_page = 1
-
-
-
 opening_html = ""
 
 if device == "cpu":
 
-	opening_html = "<span style=\"font-weight: bold; color:
+	opening_html = "<span style=\"font-weight: bold; color: #c00;\">THIS APP IS EXCEPTIONALLY SLOW!</span><br/>This app is not running on a GPU. The first time it loads after the space is rebuilt it might take 10 minutes to generate a SDXL Turbo image. It may take 2 to 3 minutes after that point to do two steps. For other models, it may take hours to create a single image."
 
 
 
@@ -527,19 +579,6 @@ number_of_reserved_tokens = 2
 
 
 
-
-
-
-
-# This will eventually be a configuration option...
-
-# "pil"	image
-# "latent"	latent space
-
-which_output_type_before_refiner_and_upscaler = "latent"
-
-
-
 ###############################################################################
 ###############################################################################
 #
@@ -552,8 +591,20 @@ which_output_type_before_refiner_and_upscaler = "latent"
|
|
552 |
###############################################################################
|
553 |
###############################################################################
|
554 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
555 |
|
556 |
-
import os
|
557 |
|
558 |
try:
|
559 |
if (str(os.uname()).find("magicfixeseverything") >= 0):
|
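Since the check in the hunk above only reads the process environment, the same switch can be tried for a single run without persisting anything. A minimal sketch, not part of app.py, assuming the variable is set before huggingface_hub is first used:

	import os

	# Per-process equivalent of the setx approach described in Instructions.txt.
	os.environ["HF_HUB_OFFLINE"] = "1"

	hub_is_offline = (
		("HF_HUB_OFFLINE" in os.environ) and
		(int(os.environ["HF_HUB_OFFLINE"]) == 1)
	)

	print(hub_is_offline)  # True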
@@ -569,6 +620,13 @@ if script_being_run_on_hugging_face == 1:
 	auto_save_imagery = 0
 	show_messages_in_modal_on_page = 0
 
+	show_messages_in_command_prompt = 1
+	show_messages_in_modal_on_page = 1
+
+	if device == "cpu":
+
+		show_image_creation_progress_log = 1
+
 	ending_html = """
 If you would like to download this app to run offline on a Windows computer that has a NVIDIA graphics card, click <a href=\"https://huggingface.co/spaces/magicfixeseverything/ai_image_creation/resolve/main/ai_image_creation.zip\">here</a> to download it.
 
@@ -771,6 +829,20 @@ default_refiner_and_upscaler_status_text = refiner_and_upscaler_status_opening_html
 
 
 
+default_use_denoising_start_in_base_model_when_using_refiner_is_selected = False
+
+if default_use_denoising_start_in_base_model_when_using_refiner == 1:
+
+	default_use_denoising_start_in_base_model_when_using_refiner_is_selected = True
+
+default_base_model_output_to_refiner_is_in_latent_space_is_selected = False
+
+if default_base_model_output_to_refiner_is_in_latent_space == 1:
+
+	default_base_model_output_to_refiner_is_in_latent_space_is_selected = True
+
+
+
 refiner_default_config_accordion_visible = True
 
 if (
@@ -783,8 +855,11 @@ if (
 refiner_default_config_accordion_open = False
 
 if (
-	(
-	(
+	(default_refiner_accordion_open == 1) or
+	(
+		(is_default_config == 1) and
+		(default_refiner_selected == 1)
+	)
 ):
 
 	refiner_default_config_accordion_open = True
@@ -803,8 +878,11 @@ if (
 refiner_online_config_accordion_open = False
 
 if (
-	(
-	(
+	(default_refiner_accordion_open == 1) or
+	(
+		(is_default_config != 1) and
+		(default_refiner_selected == 1)
+	)
 ):
 
 	refiner_online_config_accordion_open = True
@@ -827,7 +905,10 @@ if enable_refiner == 1:
 
 upscaler_accordion_open = False
 
-if
+if (
+	(default_upscaler_selected == 1) or
+	(default_upscaler_accordion_open == 1)
+):
 
 	upscaler_accordion_open = True
 
@@ -874,16 +955,25 @@ if default_base_model == "sdxl_turbo":
 
 
 
-global pipe
-global refiner
-global upscaler
-
 last_model_configuration_name_value = ""
 last_refiner_selected = ""
 last_upscaler_selected = ""
 
 
 
+if show_image_creation_progress_log == 1:
+
+	import time
+
+
+
+current_progress_text = ""
+current_actual_total_base_model_steps = ""
+current_actual_total_refiner_steps = ""
+current_actual_total_upscaler_steps = ""
+
+
+
 default_base_model_choices_array = []
 
 stored_model_configuration_names_object = {}
@@ -986,7 +1076,7 @@ def convert_seconds(
 def seed_not_valid(seed_num_str):
 	try:
 		seed_num = int(seed_num_str)
-		if (seed_num > 0) and (seed_num
+		if (seed_num > 0) and (seed_num <= default_seed_maximum):
 			return False
 		else:
 			return True
@@ -1125,6 +1215,238 @@ def update_prompt_info_from_gallery (
 
 
 
+#####################
+#
+# Callback Function for Base Model Progress
+#
+# Add the current step the generation is on in the base model to the web
+# interface.
+#
+#####################
+
+def callback_function_for_base_model_progress(
+	callback_pipe,
+	callback_step_index,
+	callback_timestep,
+	callback_kwargs
+):
+
+	global current_progress_text
+
+	global current_base_model_generation_start_time
+
+	current_progress_text = "Base model steps complete... " + str(callback_step_index) + " of " + str(current_actual_total_base_model_steps)
+
+	if int(callback_step_index) == 0:
+
+		current_base_model_generation_start_time = time.time()
+
+	if int(callback_step_index) > 0:
+
+		seconds_per_step = ((time.time() - current_base_model_generation_start_time) / int(callback_step_index))
+
+		(
+			time_per_step_hours,
+			time_per_step_minutes,
+			time_per_step_seconds
+		) = convert_seconds(seconds_per_step)
+
+		if time_per_step_hours > 0:
+
+			hours_text = "hr"
+
+			if time_per_step_hours > 1:
+
+				hours_text = "hrs"
+
+			nice_time_per_step = str(int(time_per_step_hours)) + " " + hours_text + ". " + str(int(time_per_step_minutes)) + " min. " + str(round(time_per_step_seconds, 1)) + " sec."
+
+		elif time_per_step_minutes > 0:
+
+			nice_time_per_step = str(int(time_per_step_minutes)) + " min. " + str(round(time_per_step_seconds, 1)) + " sec."
+
+		else:
+
+			nice_time_per_step = str(round(time_per_step_seconds, 2)) + " sec."
+
+		current_progress_text += "\n" + nice_time_per_step + " per step"
+
+	return {}
+
+#####################
+#
+# Callback Function for Refiner Progress
+#
+# Add the current step the generation is on in the refiner to the web
+# interface.
+#
+#####################
+
+def callback_function_for_refiner_progress(
+	callback_pipe,
+	callback_step_index,
+	callback_timestep,
+	callback_kwargs
+):
+
+	global current_progress_text
+
+	global current_refiner_generation_start_time
+
+	current_progress_text = "Refiner steps complete... " + str(callback_step_index) + " of " + str(current_actual_total_refiner_steps)
+
+	if int(callback_step_index) == 0:
+
+		current_refiner_generation_start_time = time.time()
+
+	if int(callback_step_index) > 0:
+
+		seconds_per_step = ((time.time() - current_refiner_generation_start_time) / int(callback_step_index))
+
+		(
+			time_per_step_hours,
+			time_per_step_minutes,
+			time_per_step_seconds
+		) = convert_seconds(seconds_per_step)
+
+		if time_per_step_hours > 0:
+
+			hours_text = "hr"
+
+			if time_per_step_hours > 1:
+
+				hours_text = "hrs"
+
+			nice_time_per_step = str(int(time_per_step_hours)) + " " + hours_text + ". " + str(int(time_per_step_minutes)) + " min. " + str(round(time_per_step_seconds, 1)) + " sec."
+
+		elif time_per_step_minutes > 0:
+
+			nice_time_per_step = str(int(time_per_step_minutes)) + " min. " + str(round(time_per_step_seconds, 1)) + " sec."
+
+		else:
+
+			nice_time_per_step = str(round(time_per_step_seconds, 2)) + " sec."
+
+		current_progress_text += "\n" + nice_time_per_step + " per step"
+
+	return {}
+
+#####################
+#
+# Update Log Progress
+#
+# This is called every second when "show_image_creation_progress_log" is
+# set to 1. It displays the latest value in "current_progress_text".
+#
+#####################
+
+def update_log_progress ():
+
+	global current_progress_text
+
+	log_text_field_update = gr.Textbox(
+		value = current_progress_text
+	)
+
+	return {
+		log_text_field: log_text_field_update
+	}
+
+#####################
+#
+# Before Create Image Function
+#
+# This is loaded before the image creation begins.
+#
+#####################
+
+def before_create_image_function ():
+
+	output_text_field_update = gr.Textbox(
+		visible = False
+	)
+
+	log_text_field_update = gr.Textbox(
+		value = "",
+		visible = True,
+		every = 1
+	)
+
+	generate_image_btn_update = gr.Button(
+		value = "Generating...",
+		variant = "secondary",
+		interactive = False
+	)
+
+	return {
+		output_text_field: output_text_field_update,
+		log_text_field: log_text_field_update,
+		generate_image_btn: generate_image_btn_update
+	}
+
+#####################
+#
+# After Create Image Function
+#
+# This is loaded once image creation has completed.
+#
+#####################
+
+def after_create_image_function ():
+
+	output_text_field_update = gr.Textbox(
+		visible = True
+	)
+
+	log_text_field_update = gr.Textbox(
+		value = "",
+		visible = False,
+		every = None
+	)
+
+	generate_image_btn_update = gr.Button(
+		value = "Generate",
+		variant = "primary",
+		interactive = True
+	)
+
+	return {
+		output_text_field: output_text_field_update,
+		log_text_field: log_text_field_update,
+		generate_image_btn: generate_image_btn_update
+	}
+
 #####################
 #
 # Create Image Function
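The progress callbacks added in the hunk above hang off diffusers' callback_on_step_end hook, which the pipelines invoke after every denoising step. A minimal standalone sketch of that hook, separate from app.py (the model ID, step count, and a CUDA device are assumptions for illustration):

	import torch
	from diffusers import DiffusionPipeline

	def print_progress(pipe, step_index, timestep, callback_kwargs):
		# Called by the pipeline after each denoising step.
		print("Step " + str(step_index + 1) + " complete.")
		return callback_kwargs

	pipe = DiffusionPipeline.from_pretrained(
		"stabilityai/sdxl-turbo",
		torch_dtype = torch.float16
	).to("cuda")

	image = pipe(
		"black cat",
		num_inference_steps = 2,
		guidance_scale = 0,
		callback_on_step_end = print_progress
	).images[0]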
@@ -1145,9 +1467,11 @@ def create_image_function (
 	base_model_num_inference_steps_field_for_sdxl_turbo,
 	actual_seed,
 
+	refining_selection_default_config_field_value,
 	refining_selection_online_config_normal_field_value,
 	refining_selection_online_config_automatically_selected_field_value,
 
+	refining_denoise_start_for_default_config_field_value,
 	refining_use_denoising_start_in_base_model_when_using_refiner_field_value,
 	refining_base_model_output_to_refiner_is_in_latent_space_field_value,
 
@@ -1158,6 +1482,16 @@ def create_image_function (
 	upscaling_num_inference_steps
 ):
 
+	global current_progress_text
+	global current_actual_total_base_model_steps
+	global current_actual_total_refiner_steps
+
+	current_progress_text = ""
+	current_actual_total_base_model_steps = 0
+	current_actual_total_refiner_steps = 0
+	current_actual_total_upscaler_steps = 0
+
+	refining_selection_default_config_field_value = numerical_bool(refining_selection_default_config_field_value)
 	refining_selection_online_config_normal_field_value = numerical_bool(refining_selection_online_config_normal_field_value)
 	refining_selection_online_config_automatically_selected_field_value = numerical_bool(refining_selection_online_config_automatically_selected_field_value)
 
@@ -1165,6 +1499,8 @@ def create_image_function (
 	refining_use_denoising_start_in_base_model_when_using_refiner_field_value = numerical_bool(refining_use_denoising_start_in_base_model_when_using_refiner_field_value)
 	refining_base_model_output_to_refiner_is_in_latent_space_field_value = numerical_bool(refining_base_model_output_to_refiner_is_in_latent_space_field_value)
 
+
+
 	use_upscaler = numerical_bool(upscaling_selection_field_value)
 
 
@@ -1174,23 +1510,36 @@ def create_image_function (
 
 
 
-
+	current_actual_total_base_model_steps = base_model_num_inference_steps
+	current_actual_total_upscaler_steps = upscaling_num_inference_steps
+
+
+
+	is_default_config_state = 0
 
 	if model_configuration_name_value in default_model_configuration_object:
 
-
+		is_default_config_state = 1
 
-	use_refiner = 0
 
+	use_refiner = 0
 
 	if (
 		(
-			(
-
+			(is_default_config_state == 1) and
+			refining_selection_default_config_field_value
 		) or (
-			(
-
+			(is_default_config_state != 1) and
+			(
+				(
+					(model_configuration_name_value not in model_configuration_force_refiner_object) and
+					refining_selection_online_config_normal_field_value
+				) or (
+					(model_configuration_name_value in model_configuration_force_refiner_object) and
+					refining_selection_online_config_automatically_selected_field_value
+				)
+			)
 		)
 	):
 
@@ -1202,6 +1551,7 @@ def create_image_function (
 
 		negative_prompt_text = ""
 		base_model_num_inference_steps = base_model_num_inference_steps_field_for_sdxl_turbo
+		current_actual_total_base_model_steps = base_model_num_inference_steps
 		guidance_scale = 0
 
 
@@ -1220,11 +1570,14 @@ def create_image_function (
 		(model_configuration_name_value != last_model_configuration_name_value)
 	):
 
-
+		current_progress_text = "Base model is loading."
+		show_message(current_progress_text)
 
 		if (last_model_configuration_name_value != ""):
 
-			del pipe
+#			del pipe
+			if 'pipe' in globals():
+				del pipe
 
 			if 'refiner' in globals():
 				del refiner
@@ -1309,7 +1662,8 @@ def create_image_function (
 
 	if use_refiner == 1:
 
-
+		current_progress_text = "Refiner is loading."
+		show_message(current_progress_text)
 
 		refiner_kwargs = {
 			"use_safetensors": True
@@ -1325,7 +1679,7 @@ def create_image_function (
 			refiner_kwargs["cache_dir"] = hugging_face_cache_dir
 
 		refiner = DiffusionPipeline.from_pretrained(
-
+			hugging_face_refiner_partial_path,
 			**refiner_kwargs
 		)
 
@@ -1360,7 +1714,8 @@ def create_image_function (
 
 	if use_upscaler == 1:
 
-
+		current_progress_text = "Upscaler is loading."
+		show_message(current_progress_text)
 
 		upscaler_kwargs = {
 			"use_safetensors": True
@@ -1368,7 +1723,6 @@ def create_image_function (
 
 		if device == "cuda":
 
-			upscaler_kwargs["variant"] = "fp16"
 			upscaler_kwargs["torch_dtype"] = torch.float16
 
 		if use_custom_hugging_face_cache_dir == 1:
@@ -1376,7 +1730,7 @@ def create_image_function (
 			upscaler_kwargs["cache_dir"] = hugging_face_cache_dir
 
 		upscaler = DiffusionPipeline.from_pretrained(
-
+			hugging_face_upscaler_partial_path,
 			**upscaler_kwargs
 		)
 
@@ -1498,6 +1852,18 @@ def create_image_function (
 
 
 
+	if show_image_creation_progress_log == 1:
+
+		callback_to_do_for_base_model_progress = callback_function_for_base_model_progress
+		callback_to_do_for_refiner_progress = callback_function_for_refiner_progress
+
+	else:
+
+		callback_to_do_for_base_model_progress = None
+		callback_to_do_for_refiner_progress = None
+
+
+
 	if model_configuration_name_value.find("default") < 0:
 
 
@@ -1543,43 +1909,90 @@ def create_image_function (
 
 			upscaling_num_inference_steps = 5
 
+		current_actual_total_upscaler_steps = upscaling_num_inference_steps
+
+
+
+		if show_messages_in_command_prompt == 1:
+			print ("Initial image creation has begun.");
+
+		if show_image_creation_progress_log == 1:
+			current_progress_text = "Initial image creation has begun."
 
-		show_message("Initial image creation has begun.");
-		int_image = pipe(prompt, prompt_2=prompt_2, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, num_inference_steps=steps, height=height, width=width, guidance_scale=scale, num_images_per_prompt=1, generator=generator, output_type="latent").images
+		int_image = pipe(
+			prompt,
+			prompt_2=prompt_2,
+			negative_prompt=negative_prompt,
+			negative_prompt_2=negative_prompt_2,
+			num_inference_steps=steps,
+			height=height,
+			width=width,
+			guidance_scale=scale,
+			num_images_per_prompt=1,
+			generator=generator,
+			output_type="latent",
+			callback_on_step_end=callback_to_do_for_base_model_progress
+		).images
+
+		if show_messages_in_command_prompt == 1:
+			print ("Refiner steps...");
+
+		if show_image_creation_progress_log == 1:
+			current_progress_text = "Refining is beginning."
+
+		current_actual_total_refiner_steps = int(int(n_steps) * float(high_noise_frac))
+
+		nice_refiner_denoise_start = str(refining_denoise_start_for_online_config_field_value)
+
+		refiner_info_for_info_about_prompt_lines_array = [
+			"Refiner? Yes",
+			"Refiner denoise start %: " + nice_refiner_denoise_start,
+			"Refiner number of iterations: " + str(refining_number_of_iterations_for_online_config_field_value),
+			"Actual Refining Steps: " + str(current_actual_total_refiner_steps)
+		]
+
+		image = refiner(
+			prompt=prompt,
+			prompt_2=prompt_2,
+			negative_prompt=negative_prompt,
+			negative_prompt_2=negative_prompt_2,
+			image=int_image,
+			num_inference_steps=n_steps,
+			denoising_start=high_noise_frac,
+			callback_on_step_end=callback_to_do_for_refiner_progress
+		).images[0]
+
 		if upscaling == 'Yes':
-
-
-
+
+			if show_messages_in_command_prompt == 1:
+				print ("Upscaler steps...");
+
+			if show_image_creation_progress_log == 1:
+				current_progress_text = "Upscaling in progress.\n(step by step progress not displayed)"
 
 			# Changed
 			#
 			# num_inference_steps=15
 			#
 
-			upscaled = upscaler(
-
+			upscaled = upscaler(
+				prompt=prompt,
+				negative_prompt=negative_prompt,
+				image=image,
+				num_inference_steps=upscaling_num_inference_steps,
+				guidance_scale=0
+			).images[0]
+
 			if device == "cuda":
 				torch.cuda.empty_cache()
 
-			# Changed
-			#
-			# return (image, upscaled)
-			#
-
 			image_to_return = upscaled
 
 		else:
-
-			image = refiner(prompt=prompt, prompt_2=prompt_2, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, image=int_image, num_inference_steps=n_steps ,denoising_start=high_noise_frac).images[0]
-			# torch.cuda.empty_cache()
+
 			if device == "cuda":
 				torch.cuda.empty_cache()
 
-			# Changed
-			#
-			# return (image, image)
-			#
 			image_to_return = image
 
@@ -1595,13 +2008,66 @@ def create_image_function (
 
 
 	if upscale == "Yes":
-
-
-
-
+
+		if show_messages_in_command_prompt == 1:
+			print ("Initial image creation has begun.");
+
+		if show_image_creation_progress_log == 1:
+			current_progress_text = "Initial image creation has begun."
+
+		int_image = pipe(
+			Prompt,
+			negative_prompt=negative_prompt,
+			height=height,
+			width=width,
+			num_inference_steps=steps,
+			guidance_scale=scale,
+			callback_on_step_end=callback_to_do_for_base_model_progress
+		).images
+
+		if show_messages_in_command_prompt == 1:
+			print ("Refiner steps...");
+
+		if show_image_creation_progress_log == 1:
+			current_progress_text = "Refining is beginning."
+
+		default_steps_in_diffusers = 50
+
+		current_actual_total_refiner_steps = int(default_steps_in_diffusers * float(high_noise_frac))
+
+		refiner_info_for_info_about_prompt_lines_array = [
+			"Refiner? Yes",
+			"Refiner denoise start %: " + nice_refiner_denoise_start,
+			"Refiner number of iterations: " + str(current_actual_total_refiner_steps),
+			"Actual Refining Steps: " + str(current_actual_total_refiner_steps)
+		]
+
+		image = refiner(
+			Prompt,
+			negative_prompt=negative_prompt,
+			image=int_image,
+			num_inference_steps=default_steps_in_diffusers,
+			denoising_start=high_noise_frac,
+			callback_on_step_end=callback_to_do_for_refiner_progress
+		).images[0]
+
 	else:
-
-
+
+		if show_messages_in_command_prompt == 1:
+			print ("Image creation has begun.");
+
+		if show_image_creation_progress_log == 1:
+			current_progress_text = "Image creation has begun."
+
+		image = pipe(
+			Prompt,
+			negative_prompt=negative_prompt,
+			height=height,
+			width=width,
+			num_inference_steps=steps,
+			guidance_scale=scale,
+			callback_on_step_end=callback_to_do_for_base_model_progress
+		).images[0]
 
 
 
@@ -1619,18 +2085,72 @@ def create_image_function (
 	#
 	#
 
+	if use_refiner == 1:
 
+		if refining_use_denoising_start_in_base_model_when_using_refiner_field_value == 1:
 
-
+			denoising_end = refining_denoise_start_for_default_config_field_value
 
-
+			current_actual_total_base_model_steps = int(base_model_num_inference_steps * float(refining_denoise_start_for_default_config_field_value))
 
-
+		else:
 
-
+			denoising_end = None
+
+		output_type_before_refiner = "pil"
 
+		if refining_base_model_output_to_refiner_is_in_latent_space_field_value == 1:
+
+			output_type_before_refiner = "latent"
+
+		current_actual_total_refiner_steps = (base_model_num_inference_steps - int(base_model_num_inference_steps * float(refining_denoise_start_for_default_config_field_value)))
+
+		refiner_info_for_info_about_prompt_lines_array = [
+			"Refiner? Yes"
+		]
+
+		nice_refiner_denoise_start = str(refining_denoise_start_for_online_config_field_value)
+
+		if refining_use_denoising_start_in_base_model_when_using_refiner_field_value == 1:
+
+			refiner_info_for_info_about_prompt_lines_array.extend([
+				"Set \"denoising_end\" in base model generation? Yes",
+				"Base model denoise end %: " + nice_refiner_denoise_start,
+				"Actual Base Model Steps: " + str(current_actual_total_base_model_steps)
+			])
+
+		else:
 
 		print ("Initial image steps...");
 
 	intitial_image = pipe(
 		prompt = prompt_text,
 		negative_prompt = negative_prompt_text,
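The step accounting in the hunk above follows how diffusers splits a schedule between base model and refiner: with denoising_end set, the base model runs only the first fraction of the steps, and the refiner, started at the same fraction via denoising_start, runs the remainder. A worked sketch of that arithmetic with illustrative numbers (not values taken from this app):

	base_model_num_inference_steps = 50
	refining_denoise_start = 0.75

	# Base model covers the first 75% of the schedule: int(50 * 0.75) = 37 steps.
	actual_base_model_steps = int(base_model_num_inference_steps * refining_denoise_start)

	# Refiner covers the remainder: 50 - 37 = 13 steps.
	actual_refiner_steps = base_model_num_inference_steps - actual_base_model_steps

	print(actual_base_model_steps, actual_refiner_steps)  # 37 13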
@@ -1640,27 +2160,33 @@ def create_image_function (
 			guidance_scale = guidance_scale,
 			num_images_per_prompt = 1,
 			generator = generator,
-
-			output_type =
 		).images
 
 		if show_messages_in_command_prompt == 1:
-
 			print ("Refiner steps...");
 
 		refined_image = refiner(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
 			image = intitial_image,
 			num_inference_steps = base_model_num_inference_steps,
-			denoising_start =
-			output_type = "pil"
 		).images
 
 		if show_messages_in_command_prompt == 1:
-
 			print ("Upscaler steps...");
 
 		upscaled_image = upscaler(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
@@ -1676,58 +2202,40 @@ def create_image_function (
 
 	else:
 
-		show_message("Will create initial image and then refine");
-
 		if show_messages_in_command_prompt == 1:
-
 			print ("Initial image steps...");
 
 		intitial_image = pipe(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
 			width = image_width,
 			height = image_height,
-
-
-
 			num_inference_steps = base_model_num_inference_steps,
-
-
-
-			#testing
-			# num_inference_steps = 100,
-			# denoising_end = 0.75,
-
-
-
 			guidance_scale = guidance_scale,
 			num_images_per_prompt = 1,
 			generator = generator,
-
 		).images
 
 		if show_messages_in_command_prompt == 1:
-
 			print ("Refiner steps...");
 
 		refined_image = refiner(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
 			image = intitial_image,
-
-
-
-			#testing
-			# num_inference_steps = base_model_num_inference_steps,
-			# denoising_start = refining_denoise_start_for_default_config
-
-
-
-			num_inference_steps = 60,
-			denoising_start = 0.25
-
-
-
 		).images[0]
 
 	if device == "cuda":
@@ -1739,12 +2247,13 @@ def create_image_function (
 
 	if use_upscaler == 1:
 
-		show_message("Will create initial image and then upscale");
-
 		if show_messages_in_command_prompt == 1:
-
 			print ("Initial image steps...");
 
 		intitial_image = pipe(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
@@ -1754,14 +2263,16 @@ def create_image_function (
 			guidance_scale = guidance_scale,
 			num_images_per_prompt = 1,
 			generator = generator,
-
-
 		).images
 
 		if show_messages_in_command_prompt == 1:
-
 			print ("Upscaler steps...");
 
 		upscaled_image = upscaler(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
@@ -1777,12 +2288,13 @@ def create_image_function (
 
 	else:
 
-		show_message("Will create image (no refining or upscaling)");
-
 		if show_messages_in_command_prompt == 1:
-
 			print ("Image steps...");
 
 		image = pipe(
 			prompt = prompt_text,
 			negative_prompt = negative_prompt_text,
@@ -1791,7 +2303,8 @@ def create_image_function (
 			num_inference_steps = base_model_num_inference_steps,
 			guidance_scale = guidance_scale,
 			num_images_per_prompt = 1,
-			generator = generator
 		).images[0]
 
 	if device == "cuda":
@@ -1851,51 +2364,17 @@ def create_image_function (
 	])
 
 	info_about_prompt_lines_array.extend([
-		"
 		"Model: " + nice_model_name
 	])
 
-	if use_refiner == 1:
-
-		# Default Configuration
-
-
-
-		# not done yet
-
-
-
-		# Online Configuration
-
-		if refining_denoise_start_for_online_config_field_value != 0:
-
-			nice_refiner_denoise_start = str(refining_denoise_start_for_online_config_field_value * 100) + "%"
 
-			info_about_prompt_lines_array.extend([
-				"Refiner?: Yes",
-				"Refiner denoise start %: " + nice_refiner_denoise_start
-			])
 
-
-			"Refiner number of iterations: " + nice_refiner_number_of_iterations
-		])
 
 	if use_upscaler == 1:
 
@@ -2069,8 +2548,8 @@ def create_image_function (
 	# Cancel Image Processing
 	#
 	# When running on Windows, this is an attempt at closing the command
-	# prompt from the web
-	#
 	# creation, but couldn't figure that out.
 	#
 	#####################
@@ -2216,11 +2695,11 @@ def model_configuration_field_update_function(
 
 
 
-
 
 	if model_configuration_name_value in default_model_configuration_object:
 
-
 
 		negative_prompt_field_visibility = True
 		negative_prompt_for_sdxl_turbo_field_visibility = False
@@ -2267,7 +2746,7 @@ def model_configuration_field_update_function(
 		refiner_default_config_accordion_visibility = False
 		refiner_online_config_accordion_visibility = True
 
-	if
 
 		refiner_default_config_accordion_visibility = True
 		refiner_online_config_accordion_visibility = False
@@ -2286,6 +2765,12 @@ def model_configuration_field_update_function(
 		refining_selection_online_config_normal_field_visibility = False
 		refining_selection_online_config_automatically_selected_field_visibility = True
 
 
 
 	refiner_default_config_accordion_update = gr.Accordion(
@@ -2308,6 +2793,10 @@ def model_configuration_field_update_function(
 		visible = refining_selection_online_config_automatically_selected_field_visibility
 	)
 
 
 
 	return {
@@ -2322,7 +2811,9 @@ def model_configuration_field_update_function(
 		refiner_online_config_accordion: refiner_online_config_accordion_update,
 		refining_selection_automatically_selected_message_field: refining_selection_automatically_selected_message_field_update,
 		refining_selection_online_config_normal_field: refining_selection_online_config_normal_field_update,
-		refining_selection_online_config_automatically_selected_field: refining_selection_online_config_automatically_selected_field_update
 
 	}
 
@@ -2358,11 +2849,11 @@ def update_refiner_and_upscaler_status_function(
 
 	model_configuration_name_value = base_model_object_of_model_configuration_arrays[base_model_field_value][model_configuration_field_index]
 
-
 
 	if model_configuration_name_value in default_model_configuration_object:
 
-
 
 	refining_selection_default_config_field_value = numerical_bool(refining_selection_default_config_field_value)
 	refining_selection_online_config_normal_field_value = numerical_bool(refining_selection_online_config_normal_field_value)
@@ -2373,11 +2864,19 @@ def update_refiner_and_upscaler_status_function(
 
 	if (
 		(
-			(
-
 		) or (
-			(
-
 		)
 	):
 
@@ -2436,15 +2935,25 @@ def update_refiner_and_upscaler_status_function(
 # Hide border when yield is used:
 # https://github.com/gradio-app/gradio/issues/5479
 # .generating {border: none !important;}
 
 with gr.Blocks(
 	title = "AI Image Creation",
-	css =
 	theme = gr.themes.Default(
 		spacing_size = gr.themes.sizes.spacing_md,
 #		spacing_size = gr.themes.sizes.spacing_sm,
 		radius_size = gr.themes.sizes.radius_none
-	)
 ) as sd_interface:
 
 	gr.Markdown(opening_html)
@@ -2487,7 +2996,7 @@ with gr.Blocks(
 			):
 
 				with gr.Accordion(
-					label = "Refiner (Default
 					elem_id = "refiner_default_config_accordion_id",
 					open = refiner_default_config_accordion_open,
 					visible = refiner_default_config_accordion_visible
@@ -2496,7 +3005,7 @@ with gr.Blocks(
 					#
 					#
 					#
-					# Refiner (Default
 					#
 					#
 					#
@@ -2516,44 +3025,40 @@ with gr.Blocks(
 
 					with gr.Row():
 
-
-						label = "
-
-
-
 					)
 
 					with gr.Row():
 
-
-
-							value = default_base_model_output_to_refiner_is_in_latent_space,
-#							interactive = True,
-							container = True
 						)
 
 					with gr.Row():
 
-
-						label = "
-
-
-
-							step = 0.01
 						)
 
-
-
-#					)
 
 				with gr.Accordion(
-					label = "Refiner (Online
 					elem_id = "refiner_online_config_accordion_id",
 					open = refiner_online_config_accordion_open,
 					visible = refiner_online_config_accordion_visible
@@ -2562,7 +3067,7 @@ with gr.Blocks(
 					#
 					#
 					#
-					# Refiner (Online
 					#
 					#
 					#
@@ -2618,12 +3123,19 @@ with gr.Blocks(
 
 					with gr.Row():
 
 						refining_number_of_iterations_for_online_config_field = gr.Slider(
 							label = "Refiner number of iterations",
 							minimum = 1,
 							maximum = 100,
 							value = 100,
-							step = 1
 						)
 
 				with gr.Group(
@@ -2794,13 +3306,6 @@ with gr.Blocks(
 
 		with gr.Column(scale = 1):
 
-#			with gr.Row():
-
-#				generate_image_btn = gr.Button(
-#					value = "Generate",
-#					variant = "primary"
-#				)
-
 			with gr.Row():
 
 				if use_image_gallery == 1:
@@ -2832,21 +3337,36 @@ with gr.Blocks(
 
 			with gr.Row():
 
-				output_text_field = gr.
 					label = "Prompt Information:",
 					value = "After an image is generated, its generation information will appear here." + additional_prompt_info_html,
 					show_copy_button = True,
-					lines =
 				)
 
 			with gr.Row():
 
 					value = "",
 					visible = False
 				)
 
 
 	if enable_close_command_prompt_button == 1:
 
@@ -2857,6 +3377,8 @@ with gr.Blocks(
 
 		gr.Markdown("Closing the command prompt will cancel any images in the process of being created. You will need to launch it again to create more images.")
 
 	if len(ending_html) > 0:
 
 		with gr.Accordion(
@@ -2902,7 +3424,8 @@ with gr.Blocks(
 			refiner_online_config_accordion,
 			refining_selection_automatically_selected_message_field,
 			refining_selection_online_config_normal_field,
-			refining_selection_online_config_automatically_selected_field
 		],
 		queue = None,
 		show_progress = "hidden"
@@ -2916,7 +3439,8 @@ with gr.Blocks(
 		outputs = [
 			output_image_field,
 			output_text_field
-		]
 	)
 
 	if (
@@ -2931,7 +3455,8 @@ with gr.Blocks(
 		triggers_array.extend([
 			refining_selection_default_config_field.change,
 			refining_selection_online_config_normal_field.change,
-			refining_selection_online_config_automatically_selected_field.change
 		])
 
 	if enable_upscaler == 1:
@@ -2959,6 +3484,17 @@ with gr.Blocks(
 	)
 
 	generate_image_btn_click_event = generate_image_btn.click(
 		fn = create_image_function,
 		inputs = [
 			base_model_field,
@@ -2972,9 +3508,11 @@ with gr.Blocks(
 			base_model_num_inference_steps_field_for_sdxl_turbo_field,
 			seed_field,
 
 			refining_selection_online_config_normal_field,
 			refining_selection_online_config_automatically_selected_field,
 
 			refining_use_denoising_start_in_base_model_when_using_refiner_field,
 			refining_base_model_output_to_refiner_is_in_latent_space_field,
 
@@ -2988,9 +3526,46 @@ with gr.Blocks(
 			output_image_field,
 			output_text_field,
 			prompt_truncated_field
-		]
 	)
 
 	if enable_close_command_prompt_button == 1:
 
		# https://github.com/gradio-app/gradio/pull/2433/files
@@ -3005,7 +3580,6 @@ with gr.Blocks(
 
 
 sd_interface.queue(
-#	concurrency_limit = 1,
 	max_size = 20
 )
 
@@ -3024,8 +3598,3 @@ sd_interface.launch(
 	show_error = True,
 	max_threads = 1
 )
-
-sd_interface.load(
-	scroll_to_output = False,
-	show_progress = "full"
-)
4 |
import modin.pandas as pd
|
5 |
from PIL import Image
|
6 |
from diffusers import DiffusionPipeline
|
7 |
+
import os
|
8 |
|
9 |
##########
|
10 |
|
|
|
67 |
####################
|
68 |
|
69 |
#
|
70 |
+
# Use Custom Hugging Face Cache Directory
|
71 |
#
|
72 |
# The folder where model data is stored can get huge. I choose to add it
|
73 |
# to a place where I am more likely to notice it more often. If you use
|
74 |
+
# other Hugging Face things however, and will use these models in those
|
75 |
# other things, then you might want to consider not having this here as
|
76 |
# it would duplicate the model data.
|
77 |
#
|
|
|
172 |
#
|
173 |
# Include Close Command Prompt / Cancel Button
|
174 |
#
|
175 |
+
# This doesn't work well at all. It just closes the command prompt. And
|
176 |
+
# it currently isn't canceling image creation either when used. Don't use
|
177 |
+
# it.
|
178 |
#
|
179 |
|
180 |
enable_close_command_prompt_button = 0
|
|
|
232 |
|
233 |
####################
|
234 |
|
235 |
+
#
|
236 |
+
# Show Image Creation Progress Log
|
237 |
+
#
|
238 |
+
# This adds the current step that image generation is on.
|
239 |
+
#
|
240 |
+
|
241 |
+
show_image_creation_progress_log = 1
|
242 |
+
|
243 |
+
####################
|
244 |
+
|
245 |
+
#
|
246 |
+
# Show Messages In Command Prompt
|
247 |
+
#
|
248 |
+
# Messages will be printed in command prompt.
|
249 |
+
#
|
250 |
+
|
251 |
+
show_messages_in_command_prompt = 1
|
252 |
+
|
253 |
+
####################
|
254 |
+
|
255 |
+
#
|
256 |
+
# Show Messages In Modal On Page
|
257 |
+
#
|
258 |
+
# A popup appears in the top right corner on the page.
|
259 |
+
#
|
260 |
+
|
261 |
+
show_messages_in_modal_on_page = 0
|
262 |
+
|
263 |
+
####################
|
264 |
+
|
265 |
#
|
266 |
# Up Next Is Various Configuration Arrays and Objects
|
267 |
#
|
|
|
374 |
"sdxl_2023-09-05": 1
|
375 |
}
|
376 |
|
377 |
+
# For now, the ones that force the refiner also have the "Refiner Number of
|
378 |
+
# Iterations" available.
|
379 |
+
|
380 |
+
model_configuration_include_refiner_number_of_steps_object = model_configuration_force_refiner_object
|
381 |
+
|
382 |
+
#model_configuration_include_refiner_number_of_steps_object = {
|
383 |
+
# "sdxl_2023-11-12": 1,
|
384 |
+
# "sdxl_2023-09-05": 1
|
385 |
+
#}
|
386 |
+
|
387 |
+
####################
|
388 |
+
|
389 |
+
hugging_face_refiner_partial_path = "stabilityai/stable-diffusion-xl-refiner-1.0"
|
390 |
+
hugging_face_upscaler_partial_path = "stabilityai/sd-x2-latent-upscaler"
|
391 |
+
|
392 |
####################
|
393 |
|
394 |
base_model_model_configuration_defaults_object = {
|
|
|
451 |
|
452 |
####################
|
453 |
|
454 |
+
default_prompt = "black cat"
|
455 |
default_negative_prompt = ""
|
456 |
|
457 |
default_width = 768
|
|
|
462 |
default_base_model_base_model_num_inference_steps = 50
|
463 |
default_base_model_base_model_num_inference_steps_for_sdxl_turbo = 2
|
464 |
|
465 |
+
#default_seed_maximum = 999999999999999999
|
466 |
+
default_seed_maximum = 1000000000000000000
|
467 |
default_seed_value = 876678173805928800
|
468 |
|
469 |
# If you turn off the refiner it will not be available in the display unless
|
|
|
477 |
default_refiner_selected = 0
|
478 |
default_upscaler_selected = 0
|
479 |
|
480 |
+
# Accordion visible on load?
|
481 |
+
#
|
482 |
+
# 0 If selected as default, will be open. Otherwise, closed.
|
483 |
+
# 1 Always starts open
|
484 |
+
|
485 |
+
default_refiner_accordion_open = 1
|
486 |
+
default_upscaler_accordion_open = 1
|
487 |
+
|
488 |
# xFormers:
|
489 |
#
|
490 |
# https://huggingface.co/docs/diffusers/optimization/xformers
|
|
|
549 |
|
550 |
|
551 |
|
|
|
|
|
|
|
|
|
|
|
552 |
opening_html = ""
|
553 |
|
554 |
if device == "cpu":
|
555 |
|
556 |
+
opening_html = "<span style=\"font-weight: bold; color: #c00;\">THIS APP IS EXCEPTIONALLY SLOW!</span><br/>This app is not running on a GPU. The first time it loads after the space is rebuilt it might take 10 minutes to generate a SDXL Turbo image. It may take 2 to 3 minutes after that point to do two steps. For other models, it may take hours to create a single image."
|
557 |
|
558 |
|
559 |
|
|
|
579 |
|
580 |
|
581 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
582 |
###############################################################################
|
583 |
###############################################################################
|
584 |
#
|
|
|
591 |
###############################################################################
|
592 |
###############################################################################
|
593 |
|
594 |
+
hugging_face_hub_is_offline = 0
|
595 |
+
|
596 |
+
if (
|
597 |
+
("HF_HUB_OFFLINE" in os.environ) and
|
598 |
+
(int(os.environ["HF_HUB_OFFLINE"]) == 1)
|
599 |
+
):
|
600 |
+
|
601 |
+
hugging_face_hub_is_offline = 1
|
602 |
+
|
603 |
+
if hugging_face_hub_is_offline == 0:
|
604 |
+
|
605 |
+
print ("Note: The Hugging Face cache directory does not automatically delete older data. Over time, it could eventually grow to use all the space on the drive it is on. You either need to manually clean out the folder occasionally or see Instructons.txt on how to not automatically update data once you have downloaded everything you need.")
|
606 |
+
|
607 |
|
|
|
608 |
|
609 |
try:
|
610 |
if (str(os.uname()).find("magicfixeseverything") >= 0):
|
|
|
620 |
auto_save_imagery = 0
|
621 |
show_messages_in_modal_on_page = 0
|
622 |
|
623 |
+
show_messages_in_command_prompt = 1
|
624 |
+
show_messages_in_modal_on_page = 1
|
625 |
+
|
626 |
+
if device == "cpu":
|
627 |
+
|
628 |
+
show_image_creation_progress_log = 1
|
629 |
+
|
630 |
ending_html = """
|
631 |
If you would like to download this app to run offline on a Windows computer that has a NVIDIA graphics card, click <a href=\"https://huggingface.co/spaces/magicfixeseverything/ai_image_creation/resolve/main/ai_image_creation.zip\">here</a> to download it.
|
632 |
|
|
|
829 |
|
830 |
|
831 |
|
832 |
+
default_use_denoising_start_in_base_model_when_using_refiner_is_selected = False
|
833 |
+
|
834 |
+
if default_use_denoising_start_in_base_model_when_using_refiner == 1:
|
835 |
+
|
836 |
+
default_use_denoising_start_in_base_model_when_using_refiner_is_selected = True
|
837 |
+
|
838 |
+
default_base_model_output_to_refiner_is_in_latent_space_is_selected = False
|
839 |
+
|
840 |
+
if default_base_model_output_to_refiner_is_in_latent_space == 1:
|
841 |
+
|
842 |
+
default_base_model_output_to_refiner_is_in_latent_space_is_selected = True
|
843 |
+
|
844 |
+
|
845 |
+
|
846 |
refiner_default_config_accordion_visible = True
|
847 |
|
848 |
if (
|
|
|
855 |
refiner_default_config_accordion_open = False
|
856 |
|
857 |
if (
|
858 |
+
(default_refiner_accordion_open == 1) or
|
859 |
+
(
|
860 |
+
(is_default_config == 1) and
|
861 |
+
(default_refiner_selected == 1)
|
862 |
+
)
|
863 |
):
|
864 |
|
865 |
refiner_default_config_accordion_open = True
|
|
|
878 |
refiner_online_config_accordion_open = False
|
879 |
|
880 |
if (
|
881 |
+
(default_refiner_accordion_open == 1) or
|
882 |
+
(
|
883 |
+
(is_default_config != 1) and
|
884 |
+
(default_refiner_selected == 1)
|
885 |
+
)
|
886 |
):
|
887 |
|
888 |
refiner_online_config_accordion_open = True
|
|
|
905 |
|
906 |
upscaler_accordion_open = False
|
907 |
|
908 |
+
if (
|
909 |
+
(default_upscaler_selected == 1) or
|
910 |
+
(default_upscaler_accordion_open == 1)
|
911 |
+
):
|
912 |
|
913 |
upscaler_accordion_open = True
|
914 |
|
|
|
955 |
|
956 |
|
957 |
|
|
|
|
|
|
|
|
|
958 |
last_model_configuration_name_value = ""
|
959 |
last_refiner_selected = ""
|
960 |
last_upscaler_selected = ""
|
961 |
|
962 |
|
963 |
|
964 |
+
if show_image_creation_progress_log == 1:
|
965 |
+
|
966 |
+
import time
|
967 |
+
|
968 |
+
|
969 |
+
|
970 |
+
current_progress_text = ""
|
971 |
+
current_actual_total_base_model_steps = ""
|
972 |
+
current_actual_total_refiner_steps = ""
|
973 |
+
current_actual_total_upscaler_steps = ""
|
974 |
+
|
975 |
+
|
976 |
+
|
977 |
default_base_model_choices_array = []
|
978 |
|
979 |
stored_model_configuration_names_object = {}
|
|
|
1076 |
def seed_not_valid(seed_num_str):
|
1077 |
try:
|
1078 |
seed_num = int(seed_num_str)
|
1079 |
+
if (seed_num > 0) and (seed_num <= default_seed_maximum):
|
1080 |
return False
|
1081 |
else:
|
1082 |
return True
|
|
|
1215 |
|
1216 |
|
1217 |
|
1218 |
+
#####################
|
1219 |
+
#
|
1220 |
+
# Callback Function for Base Model Progress
|
1221 |
+
#
|
1222 |
+
# Add the current step the generation is on in the base model to the web
|
1223 |
+
# interface.
|
1224 |
+
#
|
1225 |
+
#####################
|
1226 |
+
|
1227 |
+
def callback_function_for_base_model_progress(
|
1228 |
+
callback_pipe,
|
1229 |
+
callback_step_index,
|
1230 |
+
callback_timestep,
|
1231 |
+
callback_kwargs
|
1232 |
+
):
|
1233 |
+
|
1234 |
+
global current_progress_text
|
1235 |
+
|
1236 |
+
global current_base_model_generation_start_time
|
1237 |
+
|
1238 |
+
current_progress_text = "Base model steps complete... " + str(callback_step_index) + " of " + str(current_actual_total_base_model_steps)
|
1239 |
+
|
1240 |
+
if int(callback_step_index) == 0:
|
1241 |
+
|
1242 |
+
current_base_model_generation_start_time = time.time()
|
1243 |
+
|
1244 |
+
if int(callback_step_index) > 0:
|
1245 |
+
|
1246 |
+
seconds_per_step = ((time.time() - current_base_model_generation_start_time) / int(callback_step_index))
|
1247 |
+
|
1248 |
+
(
|
1249 |
+
time_per_step_hours,
|
1250 |
+
time_per_step_minutes,
|
1251 |
+
time_per_step_seconds
|
1252 |
+
) = convert_seconds(seconds_per_step)
|
1253 |
+
|
1254 |
+
if time_per_step_hours > 0:
|
1255 |
+
|
1256 |
+
hours_text = "hr"
|
1257 |
+
|
1258 |
+
if time_per_step_hours > 1:
|
1259 |
+
|
1260 |
+
hours_text = "hrs"
|
1261 |
+
|
1262 |
+
nice_time_per_step = str(int(time_per_step_hours)) + " " + hours_text + ". " + str(int(time_per_step_minutes)) + " min. " + str(round(generation_partial_seconds, 1)) + " sec."
|
1263 |
+
|
1264 |
+
elif time_per_step_minutes > 0:
|
1265 |
+
|
1266 |
+
nice_time_per_step = str(int(time_per_step_minutes)) + " min. " + str(round(generation_partial_seconds, 1)) + " sec."
|
1267 |
+
|
1268 |
+
else:
|
1269 |
+
|
1270 |
+
nice_time_per_step = str(round(time_per_step_seconds, 2)) + " sec."
|
1271 |
+
|
1272 |
+
current_progress_text += "\n" + nice_time_per_step + " per step"
|
1273 |
+
|
1274 |
+
return {}
|
1275 |
+
|
1276 |
+
|
1277 |
+
|
1278 |
+
|
1279 |
+
|
1280 |
+
|
1281 |
+
|
1282 |
+
#####################
|
1283 |
+
#
|
1284 |
+
# Callback Function for Refiner Progress
|
1285 |
+
#
|
1286 |
+
# Add the current step the generation is on in the refiner to the web
|
1287 |
+
# interface.
|
1288 |
+
#
|
1289 |
+
#####################
|
1290 |
+
|
1291 |
+
def callback_function_for_refiner_progress(
|
1292 |
+
callback_pipe,
|
1293 |
+
callback_step_index,
|
1294 |
+
callback_timestep,
|
1295 |
+
callback_kwargs
|
1296 |
+
):
|
1297 |
+
|
1298 |
+
global current_progress_text
|
1299 |
+
|
1300 |
+
global current_refiner_generation_start_time
|
1301 |
+
|
1302 |
+
current_progress_text = "Refiner steps complete... " + str(callback_step_index) + " of " + str(current_actual_total_refiner_steps)
|
1303 |
+
|
1304 |
+
if int(callback_step_index) == 0:
|
1305 |
+
|
1306 |
+
current_refiner_generation_start_time = time.time()
|
1307 |
+
|
1308 |
+
if int(callback_step_index) > 0:
|
1309 |
+
|
1310 |
+
seconds_per_step = ((time.time() - current_refiner_generation_start_time) / int(callback_step_index))
|
1311 |
+
|
1312 |
+
(
|
1313 |
+
time_per_step_hours,
|
1314 |
+
time_per_step_minutes,
|
1315 |
+
time_per_step_seconds
|
1316 |
+
) = convert_seconds(seconds_per_step)
|
1317 |
+
|
1318 |
+
if time_per_step_hours > 0:
|
1319 |
+
|
1320 |
+
hours_text = "hr"
|
1321 |
+
|
1322 |
+
if time_per_step_hours > 1:
|
1323 |
+
|
1324 |
+
hours_text = "hrs"
|
1325 |
+
|
1326 |
+
nice_time_per_step = str(int(time_per_step_hours)) + " " + hours_text + ". " + str(int(time_per_step_minutes)) + " min. " + str(round(generation_partial_seconds, 1)) + " sec."
|
1327 |
+
|
1328 |
+
elif time_per_step_minutes > 0:
|
1329 |
+
|
1330 |
+
nice_time_per_step = str(int(time_per_step_minutes)) + " min. " + str(round(generation_partial_seconds, 1)) + " sec."
|
1331 |
+
|
1332 |
+
else:
|
1333 |
+
|
1334 |
+
nice_time_per_step = str(round(time_per_step_seconds, 2)) + " sec."
|
1335 |
+
|
1336 |
+
current_progress_text += "\n" + nice_time_per_step + " per step"
|
1337 |
+
|
1338 |
+
return {}
|
1339 |
+
|
1340 |
+
|
1341 |
+
|
1342 |
+
|
1343 |
+
|
1344 |
+
|
1345 |
+
|
1346 |
+
#####################
|
1347 |
+
#
|
1348 |
+
# Update Log Progress
|
1349 |
+
#
|
1350 |
+
# This is called every second when "show_image_creation_progress_log" is
|
1351 |
+
# set to 1. It displays the latest value in "current_progress_text".
|
1352 |
+
#
|
1353 |
+
#####################
|
1354 |
+
|
1355 |
+
def update_log_progress ():
|
1356 |
+
|
1357 |
+
global current_progress_text
|
1358 |
+
|
1359 |
+
log_text_field_update = gr.Textbox(
|
1360 |
+
value = current_progress_text
|
1361 |
+
)
|
1362 |
+
|
1363 |
+
return {
|
1364 |
+
log_text_field: log_text_field_update
|
1365 |
+
}
|
1366 |
+
|
1367 |
+
|
1368 |
+
|
1369 |
+
|
1370 |
+
|
1371 |
+
|
1372 |
+
|
1373 |
+
#####################
|
1374 |
+
#
|
1375 |
+
# Before Create Image Function
|
1376 |
+
#
|
1377 |
+
# This is loaded before the image creation begins.
|
1378 |
+
#
|
1379 |
+
#####################
|
1380 |
+
|
1381 |
+
def before_create_image_function ():
|
1382 |
+
|
1383 |
+
output_text_field_update = gr.Textbox(
|
1384 |
+
visible = False
|
1385 |
+
)
|
1386 |
+
|
1387 |
+
log_text_field_update = gr.Textbox(
|
1388 |
+
value = "",
|
1389 |
+
visible = True,
|
1390 |
+
every = 1
|
1391 |
+
)
|
1392 |
+
|
1393 |
+
generate_image_btn_update = gr.Button(
|
1394 |
+
value = "Generating...",
|
1395 |
+
variant = "secondary",
|
1396 |
+
interactive = False
|
1397 |
+
)
|
1398 |
+
|
1399 |
+
return {
|
1400 |
+
output_text_field: output_text_field_update,
|
1401 |
+
log_text_field: log_text_field_update,
|
1402 |
+
generate_image_btn: generate_image_btn_update
|
1403 |
+
}
|
1404 |
+
|
1405 |
+
|
1406 |
+
|
1407 |
+
|
1408 |
+
|
1409 |
+
|
1410 |
+
|
1411 |
+
#####################
|
1412 |
+
#
|
1413 |
+
# After Create Image Function
|
1414 |
+
#
|
1415 |
+
# This is loaded once image creation has completed.
|
1416 |
+
#
|
1417 |
+
#####################
|
1418 |
+
|
1419 |
+
def after_create_image_function ():
|
1420 |
+
|
1421 |
+
output_text_field_update = gr.Textbox(
|
1422 |
+
visible = True
|
1423 |
+
)
|
1424 |
+
|
1425 |
+
log_text_field_update = gr.Textbox(
|
1426 |
+
value = "",
|
1427 |
+
visible = False,
|
1428 |
+
every = None
|
1429 |
+
)
|
1430 |
+
|
1431 |
+
generate_image_btn_update = gr.Button(
|
1432 |
+
value = "Generate",
|
1433 |
+
variant = "primary",
|
1434 |
+
interactive = True
|
1435 |
+
)
|
1436 |
+
|
1437 |
+
return {
|
1438 |
+
output_text_field: output_text_field_update,
|
1439 |
+
log_text_field: log_text_field_update,
|
1440 |
+
generate_image_btn: generate_image_btn_update
|
1441 |
+
}
|
1442 |
+
|
1443 |
+
|
1444 |
+
|
1445 |
+
|
1446 |
+
|
1447 |
+
|
1448 |
+
|
1449 |
+
|
1450 |
#####################
|
1451 |
#
|
1452 |
# Create Image Function
|
|
|
1467 |
base_model_num_inference_steps_field_for_sdxl_turbo,
|
1468 |
actual_seed,
|
1469 |
|
1470 |
+
refining_selection_default_config_field_value,
|
1471 |
refining_selection_online_config_normal_field_value,
|
1472 |
refining_selection_online_config_automatically_selected_field_value,
|
1473 |
|
1474 |
+
refining_denoise_start_for_default_config_field_value,
|
1475 |
refining_use_denoising_start_in_base_model_when_using_refiner_field_value,
|
1476 |
refining_base_model_output_to_refiner_is_in_latent_space_field_value,
|
1477 |
|
|
|
1482 |
upscaling_num_inference_steps
|
1483 |
):
|
1484 |
|
1485 |
+
global current_progress_text
|
1486 |
+
global current_actual_total_base_model_steps
|
1487 |
+
global current_actual_total_refiner_steps
|
1488 |
+
|
1489 |
+
current_progress_text = ""
|
1490 |
+
current_actual_total_base_model_steps = 0
|
1491 |
+
current_actual_total_refiner_steps = 0
|
1492 |
+
current_actual_total_upscaler_steps = 0
|
1493 |
+
|
1494 |
+
refining_selection_default_config_field_value = numerical_bool(refining_selection_default_config_field_value)
|
1495 |
refining_selection_online_config_normal_field_value = numerical_bool(refining_selection_online_config_normal_field_value)
|
1496 |
refining_selection_online_config_automatically_selected_field_value = numerical_bool(refining_selection_online_config_automatically_selected_field_value)
|
1497 |
|
|
|
1499 |
refining_use_denoising_start_in_base_model_when_using_refiner_field_value = numerical_bool(refining_use_denoising_start_in_base_model_when_using_refiner_field_value)
|
1500 |
refining_base_model_output_to_refiner_is_in_latent_space_field_value = numerical_bool(refining_base_model_output_to_refiner_is_in_latent_space_field_value)
|
1501 |
|
1502 |
+
|
1503 |
+
|
1504 |
use_upscaler = numerical_bool(upscaling_selection_field_value)
|
1505 |
|
1506 |
|
|
|
1510 |
|
1511 |
|
1512 |
|
1513 |
+
current_actual_total_base_model_steps = base_model_num_inference_steps
|
1514 |
+
current_actual_total_upscaler_steps = upscaling_num_inference_steps
|
1515 |
+
|
1516 |
+
|
1517 |
+
|
1518 |
+
is_default_config_state = 0
|
1519 |
|
1520 |
if model_configuration_name_value in default_model_configuration_object:
|
1521 |
|
1522 |
+
is_default_config_state = 1
|
1523 |
|
|
|
1524 |
|
1525 |
|
1526 |
+
use_refiner = 0
|
1527 |
|
1528 |
if (
|
1529 |
(
|
1530 |
+
(is_default_config_state == 1) and
|
1531 |
+
refining_selection_default_config_field_value
|
1532 |
) or (
|
1533 |
+
(is_default_config_state != 1) and
|
1534 |
+
(
|
1535 |
+
(
|
1536 |
+
(model_configuration_name_value not in model_configuration_force_refiner_object) and
|
1537 |
+
refining_selection_online_config_normal_field_value
|
1538 |
+
) or (
|
1539 |
+
(model_configuration_name_value in model_configuration_force_refiner_object) and
|
1540 |
+
refining_selection_online_config_automatically_selected_field_value
|
1541 |
+
)
|
1542 |
+
)
|
1543 |
)
|
1544 |
):
|
1545 |
|
|
|
1551 |
|
1552 |
negative_prompt_text = ""
|
1553 |
base_model_num_inference_steps = base_model_num_inference_steps_field_for_sdxl_turbo
|
1554 |
+
current_actual_total_base_model_steps = base_model_num_inference_steps
|
1555 |
guidance_scale = 0
|
1556 |
|
1557 |
|
|
|
1570 |
(model_configuration_name_value != last_model_configuration_name_value)
|
1571 |
):
|
1572 |
|
1573 |
+
current_progress_text = "Base model is loading."
|
1574 |
+
show_message(current_progress_text)
|
1575 |
|
1576 |
if (last_model_configuration_name_value != ""):
|
1577 |
|
1578 |
+
# del pipe
|
1579 |
+
if 'pipe' in globals():
|
1580 |
+
del pipe
|
1581 |
|
1582 |
if 'refiner' in globals():
|
1583 |
del refiner
|
|
|
1662 |
|
1663 |
if use_refiner == 1:
|
1664 |
|
1665 |
+
current_progress_text = "Refiner is loading."
|
1666 |
+
show_message(current_progress_text)
|
1667 |
|
1668 |
refiner_kwargs = {
|
1669 |
"use_safetensors": True
|
|
|
1679 |
refiner_kwargs["cache_dir"] = hugging_face_cache_dir
|
1680 |
|
1681 |
refiner = DiffusionPipeline.from_pretrained(
|
1682 |
+
hugging_face_refiner_partial_path,
|
1683 |
**refiner_kwargs
|
1684 |
)
|
1685 |
|
|
|
1714 |
|
1715 |
if use_upscaler == 1:
|
1716 |
|
1717 |
+
current_progress_text = "Upscaler is loading."
|
1718 |
+
show_message(current_progress_text)
|
1719 |
|
1720 |
upscaler_kwargs = {
|
1721 |
"use_safetensors": True
|
|
|
1723 |
|
1724 |
if device == "cuda":
|
1725 |
|
|
|
1726 |
upscaler_kwargs["torch_dtype"] = torch.float16
|
1727 |
|
1728 |
if use_custom_hugging_face_cache_dir == 1:
|
|
|
1730 |
upscaler_kwargs["cache_dir"] = hugging_face_cache_dir
|
1731 |
|
1732 |
upscaler = DiffusionPipeline.from_pretrained(
|
1733 |
+
hugging_face_upscaler_partial_path,
|
1734 |
**upscaler_kwargs
|
1735 |
)
|
1736 |
|
|
|
1852 |
|
1853 |
|
1854 |
|
1855 |
+
if show_image_creation_progress_log == 1:
|
1856 |
+
|
1857 |
+
callback_to_do_for_base_model_progress = callback_function_for_base_model_progress
|
1858 |
+
callback_to_do_for_refiner_progress = callback_function_for_refiner_progress
|
1859 |
+
|
1860 |
+
else:
|
1861 |
+
|
1862 |
+
callback_to_do_for_base_model_progress = None
|
1863 |
+
callback_to_do_for_refiner_progress = None
|
1864 |
+
|
1865 |
+
|
1866 |
+
|
1867 |
if model_configuration_name_value.find("default") < 0:
|
1868 |
|
1869 |
|
|
|
1909 |
|
1910 |
upscaling_num_inference_steps = 5
|
1911 |
|
1912 |
+
current_actual_total_upscaler_steps = upscaling_num_inference_steps
|
1913 |
+
|
1914 |
+
|
1915 |
+
|
1916 |
+
if show_messages_in_command_prompt == 1:
|
1917 |
+
print ("Initial image creation has begun.");
|
1918 |
+
|
1919 |
+
if show_image_creation_progress_log == 1:
|
1920 |
+
current_progress_text = "Initial image creation has begun."
|
1921 |
+
|
1922 |
+
int_image = pipe(
|
1923 |
+
prompt,
|
1924 |
+
prompt_2=prompt_2,
|
1925 |
+
negative_prompt=negative_prompt,
|
1926 |
+
negative_prompt_2=negative_prompt_2,
|
1927 |
+
num_inference_steps=steps,
|
1928 |
+
height=height,
|
1929 |
+
width=width,
|
1930 |
+
guidance_scale=scale,
|
1931 |
+
num_images_per_prompt=1,
|
1932 |
+
generator=generator,
|
1933 |
+
output_type="latent",
|
1934 |
+
callback_on_step_end=callback_to_do_for_base_model_progress
|
1935 |
+
).images
|
1936 |
+
|
1937 |
+
if show_messages_in_command_prompt == 1:
|
1938 |
+
print ("Refiner steps...");
|
1939 |
+
|
1940 |
+
if show_image_creation_progress_log == 1:
|
1941 |
+
current_progress_text = "Refining is beginning."
|
1942 |
|
1943 |
+
current_actual_total_refiner_steps = int(int(n_steps) * float(high_noise_frac))
|
1944 |
+
|
1945 |
+
nice_refiner_denoise_start = str(refining_denoise_start_for_online_config_field_value)
|
1946 |
+
|
1947 |
+
refiner_info_for_info_about_prompt_lines_array = [
|
1948 |
+
"Refiner? Yes"
|
1949 |
+
"Refiner denoise start %: " + nice_refiner_denoise_start,
|
1950 |
+
"Refiner number of iterations: " + str(refining_number_of_iterations_for_online_config_field_value),
|
1951 |
+
"Actual Refining Steps: " + str(current_actual_total_refiner_steps)
|
1952 |
+
]
|
1953 |
+
|
1954 |
+
image = refiner(
|
1955 |
+
prompt=prompt,
|
1956 |
+
prompt_2=prompt_2,
|
1957 |
+
negative_prompt=negative_prompt,
|
1958 |
+
negative_prompt_2=negative_prompt_2,
|
1959 |
+
image=int_image,
|
1960 |
+
num_inference_steps=n_steps,
|
1961 |
+
denoising_start=high_noise_frac,
|
1962 |
+
callback_on_step_end=callback_to_do_for_refiner_progress
|
1963 |
+
).images[0]
|
1964 |
|
|
|
|
|
1965 |
if upscaling == 'Yes':
|
1966 |
+
|
1967 |
+
if show_messages_in_command_prompt == 1:
|
1968 |
+
print ("Upscaler steps...");
|
1969 |
+
|
1970 |
+
if show_image_creation_progress_log == 1:
|
1971 |
+
current_progress_text = "Upscaling in progress.\n(step by step progress not displayed)"
|
1972 |
|
1973 |
# Changed
|
1974 |
#
|
1975 |
# num_inference_steps=15
|
1976 |
#
|
1977 |
|
1978 |
+
upscaled = upscaler(
|
1979 |
+
prompt=prompt,
|
1980 |
+
negative_prompt=negative_prompt,
|
1981 |
+
image=image,
|
1982 |
+
num_inference_steps=upscaling_num_inference_steps,
|
1983 |
+
guidance_scale=0
|
1984 |
+
).images[0]
|
1985 |
+
|
1986 |
if device == "cuda":
|
1987 |
torch.cuda.empty_cache()
|
1988 |
|
|
|
|
|
|
|
|
|
|
|
1989 |
image_to_return = upscaled
|
1990 |
|
1991 |
else:
|
1992 |
+
|
|
|
|
|
1993 |
if device == "cuda":
|
1994 |
torch.cuda.empty_cache()
|
1995 |
|
|
|
|
|
|
|
|
|
1996 |
image_to_return = image
|
1997 |
|
1998 |
|
|
|

    if upscale == "Yes":
+
+       if show_messages_in_command_prompt == 1:
+           print ("Initial image creation has begun.");
+
+       if show_image_creation_progress_log == 1:
+           current_progress_text = "Initial image creation has begun."
+
+       int_image = pipe(
+           Prompt,
+           negative_prompt=negative_prompt,
+           height=height,
+           width=width,
+           num_inference_steps=steps,
+           guidance_scale=scale,
+           callback_on_step_end=callback_to_do_for_base_model_progress
+       ).images
+
+       if show_messages_in_command_prompt == 1:
+           print ("Refiner steps...");
+
+       if show_image_creation_progress_log == 1:
+           current_progress_text = "Refining is beginning."
+
+       default_steps_in_diffusers = 50
+
+       current_actual_total_refiner_steps = int(default_steps_in_diffusers * float(high_noise_frac))
+
+       refiner_info_for_info_about_prompt_lines_array = [
+           "Refiner? Yes",
+           "Refiner denoise start %: " + nice_refiner_denoise_start,
+           "Refiner number of iterations: " + str(current_actual_total_refiner_steps),
+           "Actual Refining Steps: " + str(current_actual_total_refiner_steps)
+       ]
+
+       image = refiner(
+           Prompt,
+           negative_prompt=negative_prompt,
+           image=int_image,
+           num_inference_steps=default_steps_in_diffusers,
+           denoising_start=high_noise_frac,
+           callback_on_step_end=callback_to_do_for_refiner_progress
+       ).images[0]
+
    else:
+
+       if show_messages_in_command_prompt == 1:
+           print ("Image creation has begun.");
+
+       if show_image_creation_progress_log == 1:
+           current_progress_text = "Image creation has begun."
+
+       image = pipe(
+           Prompt,
+           negative_prompt=negative_prompt,
+           height=height,
+           width=width,
+           num_inference_steps=steps,
+           guidance_scale=scale,
+           callback_on_step_end=callback_to_do_for_base_model_progress
+       ).images[0]

...
    #
    #

+   if use_refiner == 1:

+       if refining_use_denoising_start_in_base_model_when_using_refiner_field_value == 1:

+           denoising_end = refining_denoise_start_for_default_config_field_value

+           current_actual_total_base_model_steps = int(base_model_num_inference_steps * float(refining_denoise_start_for_default_config_field_value))

+       else:

+           denoising_end = None
+
+       output_type_before_refiner = "pil"

+       if refining_base_model_output_to_refiner_is_in_latent_space_field_value == 1:
+
+           output_type_before_refiner = "latent"
+
+       current_actual_total_refiner_steps = (base_model_num_inference_steps - int(base_model_num_inference_steps * float(refining_denoise_start_for_default_config_field_value)))
+
+       refiner_info_for_info_about_prompt_lines_array = [
+           "Refiner? Yes"
+       ]
+
+       nice_refiner_denoise_start = str(refining_denoise_start_for_online_config_field_value)
+
+       if refining_use_denoising_start_in_base_model_when_using_refiner_field_value == 1:
+
+           refiner_info_for_info_about_prompt_lines_array.extend([
+               "Set \"denoising_end\" in base model generation? Yes",
+               "Base model denoise end %: " + nice_refiner_denoise_start,
+               "Actual Base Model Steps: " + str(current_actual_total_base_model_steps)
+           ])
+
+       else:
+
+           refiner_info_for_info_about_prompt_lines_array.extend([
+               "Set \"denoising_end\" in base model generation? No",
+           ])
+
+       refiner_info_for_info_about_prompt_lines_array.extend([
+           "Refiner denoise start %: " + nice_refiner_denoise_start,
+           "Actual Refining Steps: " + str(current_actual_total_refiner_steps)
+       ])
+
+       if refining_base_model_output_to_refiner_is_in_latent_space_field_value == 1:
+
+           refiner_info_for_info_about_prompt_lines_array.extend([
+               "Base model output in latent space before refining? Yes",
+           ])
+
+       else:
+
+           refiner_info_for_info_about_prompt_lines_array.extend([
+               "Base model output in latent space before refining? No",
+           ])
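For reference, the arithmetic above splits one step schedule between the base model and the refiner. A worked example with made-up numbers (these are not values from this commit):

base_model_num_inference_steps = 50
refining_denoise_start = 0.95

# The base model runs the first 95% of the schedule: int(50 * 0.95) = 47 steps.
actual_base_model_steps = int(base_model_num_inference_steps * refining_denoise_start)

# The refiner runs whatever remains: 50 - 47 = 3 steps.
actual_refiner_steps = base_model_num_inference_steps - actual_base_model_steps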
+
+       if use_upscaler == 1:
+
+           if show_messages_in_command_prompt == 1:
+               print ("Will create initial image, then refine and then upscale.");
                print ("Initial image steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Initial image creation has begun."
+
            intitial_image = pipe(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
...
                guidance_scale = guidance_scale,
                num_images_per_prompt = 1,
                generator = generator,
+               denoising_end = denoising_end,
+               output_type = output_type_before_refiner,
+               callback_on_step_end = callback_to_do_for_base_model_progress
            ).images

            if show_messages_in_command_prompt == 1:
                print ("Refiner steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Refining is beginning."
+
            refined_image = refiner(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
                image = intitial_image,
                num_inference_steps = base_model_num_inference_steps,
+               denoising_start = refining_denoise_start_for_default_config_field_value,
+               output_type = "pil",
+               callback_on_step_end = callback_to_do_for_refiner_progress
            ).images

            if show_messages_in_command_prompt == 1:
                print ("Upscaler steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Upscaling in progress.\n(step by step progress not displayed)"
+
            upscaled_image = upscaler(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
...

        else:

            if show_messages_in_command_prompt == 1:
+               print ("Will create initial image and then refine.");
                print ("Initial image steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Initial image creation has begun."
+
            intitial_image = pipe(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
                width = image_width,
                height = image_height,
                num_inference_steps = base_model_num_inference_steps,
                guidance_scale = guidance_scale,
                num_images_per_prompt = 1,
                generator = generator,
+               denoising_end = denoising_end,
+               output_type = output_type_before_refiner,
+               callback_on_step_end = callback_to_do_for_base_model_progress
            ).images

            if show_messages_in_command_prompt == 1:
                print ("Refiner steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Refining is beginning."
+
            refined_image = refiner(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
                image = intitial_image,
+               num_inference_steps = base_model_num_inference_steps,
+               denoising_start = refining_denoise_start_for_default_config_field_value,
+               callback_on_step_end = callback_to_do_for_refiner_progress
            ).images[0]

            if device == "cuda":
...

        if use_upscaler == 1:

            if show_messages_in_command_prompt == 1:
+               print ("Will create initial image and then upscale.");
                print ("Initial image steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Initial image creation has begun."
+
            intitial_image = pipe(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
...
                guidance_scale = guidance_scale,
                num_images_per_prompt = 1,
                generator = generator,
+               output_type = "pil",
+               callback_on_step_end = callback_to_do_for_base_model_progress
            ).images

            if show_messages_in_command_prompt == 1:
                print ("Upscaler steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Upscaling in progress.\n(step by step progress not displayed)"
+
            upscaled_image = upscaler(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
...

        else:

            if show_messages_in_command_prompt == 1:
+               print ("Will create image (no refining or upscaling).");
                print ("Image steps...");

+           if show_image_creation_progress_log == 1:
+               current_progress_text = "Image creation has begun."
+
            image = pipe(
                prompt = prompt_text,
                negative_prompt = negative_prompt_text,
...
                num_inference_steps = base_model_num_inference_steps,
                guidance_scale = guidance_scale,
                num_images_per_prompt = 1,
+               generator = generator,
+               callback_on_step_end = callback_to_do_for_base_model_progress
            ).images[0]

            if device == "cuda":
...
        ])

    info_about_prompt_lines_array.extend([
+       "Steps: " + str(base_model_num_inference_steps),
        "Model: " + nice_model_name
    ])

+   if use_refiner == 1:

+       # Default Configuration

+       info_about_prompt_lines_array.extend(refiner_info_for_info_about_prompt_lines_array)

    if use_upscaler == 1:

...
# Cancel Image Processing
#
# When running on Windows, this is an attempt at closing the command
+# prompt from the web display. It's really not worth having this. You can
+# just close the prompt. I would like a nice way to cancel image
# creation, but couldn't figure that out.
#
#####################
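One possible route to the cancellation wished for in that comment: when an app uses a queue (this one does, via sd_interface.queue further down), Gradio event listeners accept a cancels argument that aborts other in-flight events. An untested sketch using the generate_image_btn_click_event this script already stores (the cancel button itself is hypothetical, not part of this commit):

cancel_image_btn = gr.Button("Cancel image creation")

cancel_image_btn.click(
    fn = None,
    inputs = None,
    outputs = None,
    cancels = [generate_image_btn_click_event]
)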
...

+   is_default_config_state = 0

    if model_configuration_name_value in default_model_configuration_object:

+       is_default_config_state = 1

    negative_prompt_field_visibility = True
    negative_prompt_for_sdxl_turbo_field_visibility = False

...
    refiner_default_config_accordion_visibility = False
    refiner_online_config_accordion_visibility = True

+   if is_default_config_state == 1:

        refiner_default_config_accordion_visibility = True
        refiner_online_config_accordion_visibility = False

...

    refining_selection_online_config_normal_field_visibility = False
    refining_selection_online_config_automatically_selected_field_visibility = True

+   refining_number_of_iterations_for_online_config_field_visibility = False
+
+   if model_configuration_name_value in model_configuration_include_refiner_number_of_steps_object:
+
+       refining_number_of_iterations_for_online_config_field_visibility = True

    refiner_default_config_accordion_update = gr.Accordion(
...
        visible = refining_selection_online_config_automatically_selected_field_visibility
    )

+   # The field being updated is created as a gr.Slider below, so the update
+   # object must be a gr.Slider as well.
+   refining_number_of_iterations_for_online_config_field_update = gr.Slider(
+       visible = refining_number_of_iterations_for_online_config_field_visibility
+   )

    return {
...
        refiner_online_config_accordion: refiner_online_config_accordion_update,
        refining_selection_automatically_selected_message_field: refining_selection_automatically_selected_message_field_update,
        refining_selection_online_config_normal_field: refining_selection_online_config_normal_field_update,
+       refining_selection_online_config_automatically_selected_field: refining_selection_online_config_automatically_selected_field_update,
+       refining_number_of_iterations_for_online_config_field: refining_number_of_iterations_for_online_config_field_update
    }
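The dictionary returned above follows the Gradio 4 update pattern: a listener returns fresh component instances that carry only the properties to change, keyed by the components they target. A minimal self-contained illustration (my example, not part of this commit):

import gradio as gr

with gr.Blocks() as demo:

    iterations_slider = gr.Slider(label = "Refiner number of iterations")
    hide_btn = gr.Button("Hide slider")

    def hide_slider():
        # Returning a component instance updates only the properties it sets.
        return gr.Slider(visible = False)

    hide_btn.click(fn = hide_slider, inputs = None, outputs = iterations_slider)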
...

    model_configuration_name_value = base_model_object_of_model_configuration_arrays[base_model_field_value][model_configuration_field_index]

+   is_default_config_state = 0

    if model_configuration_name_value in default_model_configuration_object:

+       is_default_config_state = 1

    refining_selection_default_config_field_value = numerical_bool(refining_selection_default_config_field_value)
    refining_selection_online_config_normal_field_value = numerical_bool(refining_selection_online_config_normal_field_value)

...

    if (
        (
+           (is_default_config_state == 1) and
+           refining_selection_default_config_field_value
        ) or (
+           (is_default_config_state != 1) and
+           (
+               (
+                   (model_configuration_name_value not in model_configuration_force_refiner_object) and
+                   refining_selection_online_config_normal_field_value
+               ) or (
+                   (model_configuration_name_value in model_configuration_force_refiner_object) and
+                   refining_selection_online_config_automatically_selected_field_value
+               )
+           )
        )
    ):

...
# Hide border when yield is used:
# https://github.com/gradio-app/gradio/issues/5479
# .generating {border: none !important;}
+#
+# Remove orange border for generation progress.
+# #generation_progress_id div {border: none;}
+
+css_to_use = "footer{display:none !important}"
+
+if show_image_creation_progress_log == 1:
+
+    css_to_use += "#generation_progress_id div {border: none;}"

with gr.Blocks(
    title = "AI Image Creation",
+   css = css_to_use,
    theme = gr.themes.Default(
        spacing_size = gr.themes.sizes.spacing_md,
        # spacing_size = gr.themes.sizes.spacing_sm,
        radius_size = gr.themes.sizes.radius_none
+   ),
+   analytics_enabled = False
) as sd_interface:

    gr.Markdown(opening_html)

...
        ):

            with gr.Accordion(
+               label = "Refiner (Default configuration)",
                elem_id = "refiner_default_config_accordion_id",
                open = refiner_default_config_accordion_open,
                visible = refiner_default_config_accordion_visible
...
                #
                #
                #
+               # Refiner (Default configuration)
                #
                #
                #

...

                with gr.Row():

+                   refining_denoise_start_for_default_config_field = gr.Slider(
+                       label = "Refiner denoise start %",
+                       minimum = 0.7,
+                       maximum = 0.99,
+                       value = 0.95,
+                       step = 0.01
                    )

                with gr.Row():

+                   refiner_steps_text_field = gr.HTML(
+                       value = ""
                    )

                with gr.Row():

+                   refining_use_denoising_start_in_base_model_when_using_refiner_field = gr.Checkbox(
+                       label = "Use \"denoising_start\" value as \"denoising_end\" value in base model generation when using refiner",
+                       value = default_use_denoising_start_in_base_model_when_using_refiner_is_selected,
+                       interactive = True,
+                       container = True
                    )

+               with gr.Row():

+                   refining_base_model_output_to_refiner_is_in_latent_space_field = gr.Checkbox(
+                       label = "Base model output in latent space instead of PIL image when using refiner",
+                       value = default_base_model_output_to_refiner_is_in_latent_space_is_selected,
+                       interactive = True,
+                       container = True
+                   )

            with gr.Accordion(
+               label = "Refiner (Online configuration)",
                elem_id = "refiner_online_config_accordion_id",
                open = refiner_online_config_accordion_open,
                visible = refiner_online_config_accordion_visible
...
                #
                #
                #
+               # Refiner (Online configuration)
                #
                #
                #

...

                with gr.Row():

+                   refining_number_of_iterations_for_online_config_field_visible = False
+
+                   if default_model_configuration in model_configuration_include_refiner_number_of_steps_object:
+
+                       refining_number_of_iterations_for_online_config_field_visible = True

                    refining_number_of_iterations_for_online_config_field = gr.Slider(
                        label = "Refiner number of iterations",
                        minimum = 1,
                        maximum = 100,
                        value = 100,
+                       step = 1,
+                       visible = refining_number_of_iterations_for_online_config_field_visible
                    )

            with gr.Group(
...

            with gr.Column(scale = 1):

                with gr.Row():

                    if use_image_gallery == 1:

...

                with gr.Row():

+                   output_text_field = gr.Textbox(
                        label = "Prompt Information:",
                        value = "After an image is generated, its generation information will appear here." + additional_prompt_info_html,
                        show_copy_button = True,
+                       lines = 10,
+                       max_lines = 20,
+                       every = None#,
+                       #container = False
                    )

                with gr.Row():

+                   log_text_field = gr.Textbox(
+                       label = "Generation Progress:",
+                       elem_id = "generation_progress_id",
+                       elem_classes = "",
+                       interactive = False,
                        value = "",
+                       show_copy_button = False,
                        visible = False
                    )

+               with gr.Row():
+
+                   prompt_truncated_field = gr.HTML(
+                       value = "",
+                       visible = False
+                   )

                if enable_close_command_prompt_button == 1:

...

                    gr.Markdown("Closing the command prompt will cancel any images in the process of being created. You will need to launch it again to create more images.")

        if len(ending_html) > 0:

            with gr.Accordion(
...
                refiner_online_config_accordion,
                refining_selection_automatically_selected_message_field,
                refining_selection_online_config_normal_field,
+               refining_selection_online_config_automatically_selected_field,
+               refining_number_of_iterations_for_online_config_field
            ],
            queue = None,
            show_progress = "hidden"
...
            outputs = [
                output_image_field,
                output_text_field
+           ],
+           show_progress = "hidden"
        )

        if (
...
            triggers_array.extend([
                refining_selection_default_config_field.change,
                refining_selection_online_config_normal_field.change,
+               refining_selection_online_config_automatically_selected_field.change,
+               model_configuration_field.change
            ])

        if enable_upscaler == 1:

...
        )

    generate_image_btn_click_event = generate_image_btn.click(
+       fn = before_create_image_function,
+       inputs = [],
+       outputs = [
+           output_image_field,
+           output_text_field,
+           log_text_field,
+           generate_image_btn
+       ],
+       show_progress = "minimal",
+       queue = True
+   ).then(
        fn = create_image_function,
        inputs = [
            base_model_field,
...
            base_model_num_inference_steps_field_for_sdxl_turbo_field,
            seed_field,

+           refining_selection_default_config_field,
            refining_selection_online_config_normal_field,
            refining_selection_online_config_automatically_selected_field,

+           refining_denoise_start_for_default_config_field,
            refining_use_denoising_start_in_base_model_when_using_refiner_field,
            refining_base_model_output_to_refiner_is_in_latent_space_field,

...
            output_image_field,
            output_text_field,
            prompt_truncated_field
+       ],
+       queue = True
+   ).then(
+       fn = after_create_image_function,
+       inputs = [],
+       outputs = [
+           output_text_field,
+           log_text_field,
+           generate_image_btn
+       ],
+       queue = False
    )

+   sd_interface_load_kwargs = {
+       "scroll_to_output": False,
+       "show_progress": "full"
+   }
+
+   if show_image_creation_progress_log == 1:
+
+       sd_interface_continuous = sd_interface.load(
+           fn = update_log_progress,
+           inputs = None,
+           outputs = [
+               log_text_field
+           ],
+           every = 1,
+           **sd_interface_load_kwargs
+       )
+
+   else:
+
+       sd_interface_continuous = sd_interface.load(
+           **sd_interface_load_kwargs
+       )
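The every = 1 argument above makes Gradio call update_log_progress once per second while the page is open, which is how the hidden Generation Progress textbox stays current. update_log_progress is defined elsewhere in app.py; my guess at a minimal version compatible with this load() call (not the committed code) simply hands the module-level progress text back to the textbox:

def update_log_progress():

    # current_progress_text is updated by the generation callbacks.
    global current_progress_text

    return current_progress_text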
    if enable_close_command_prompt_button == 1:

        # https://github.com/gradio-app/gradio/pull/2433/files

...

sd_interface.queue(
    max_size = 20
)

...
    show_error = True,
    max_threads = 1
)