{
"last_node_id": 49,
"last_link_id": 46,
"nodes": [
{
"id": 40,
"type": "Note",
"pos": [
750,
1280
],
"size": {
"0": 451.5049743652344,
"1": 424.4164123535156
},
"flags": {},
"order": 0,
"mode": 0,
"title": "Note - KSampler ADVANCED General Information",
"properties": {
"text": ""
},
"widgets_values": [
"Here are the settings that SHOULD stay in place if you want this workflow to work correctly:\n - add_noise: enable = This adds random noise into the picture so the model can denoise it\n\n - return_with_leftover_noise: enable = This sends the latent image data and all it's leftover noise to the next KSampler node.\n\nThe settings to pay attention to:\n - control_after_generate = generates a new random seed after each workflow job completed.\n - steps = This is the amount of iterations you would like to run the positive and negative CLIP prompts through. Each Step will add (positive) or remove (negative) pixels based on what stable diffusion \"thinks\" should be there according to the model's training\n - cfg = This is how much you want SDXL to adhere to the prompt. Lower CFG gives you more creative but often blurrier results. Higher CFG (recommended max 10) gives you stricter results according to the CLIP prompt. If the CFG value is too high, it can also result in \"burn-in\" where the edges of the picture become even stronger, often highlighting details in unnatural ways.\n - sampler_name = This is the sampler type, and unfortunately different samplers and schedulers have better results with fewer steps, while others have better success with higher steps. This will require experimentation on your part!\n - scheduler = The algorithm/method used to choose the timesteps to denoise the picture.\n - start_at_step = This is the step number the KSampler will start out it's process of de-noising the picture or \"removing the random noise to reveal the picture within\". The first KSampler usually starts with Step 0. Starting at step 0 is the same as setting denoise to 1.0 in the regular Sampler node.\n - end_at_step = This is the step number the KSampler will stop it's process of de-noising the picture. If there is any remaining leftover noise and return_with_leftover_noise is enabled, then it will pass on the left over noise to the next KSampler (assuming there is another one)."
],
"color": "#223",
"bgcolor": "#335"
},
{
"id": 43,
"type": "Note",
"pos": [
495,
1409
],
"size": {
"0": 240,
"1": 80
},
"flags": {},
"order": 1,
"mode": 0,
"title": "Note - CLIP Encode (REFINER)",
"properties": {
"text": ""
},
"widgets_values": [
"These nodes receive the text from the prompt and use the optimal CLIP settings for the specified checkpoint model (in this case: SDXL Refiner)"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 36,
"type": "Note",
"pos": [
165,
1295
],
"size": {
"0": 315.70074462890625,
"1": 147.9551239013672
},
"flags": {},
"order": 2,
"mode": 0,
"title": "Note - Load Checkpoint BASE",
"properties": {
"text": ""
},
"widgets_values": [
"This is a checkpoint model loader. \n - This is set up automatically with the optimal settings for whatever SD model version you choose to use.\n - In this example, it is for the Base SDXL model\n - This node is also used for SD1.5 and SD2.x models\n \nNOTE: When loading in another person's workflow, be sure to manually choose your own *local* model. This also applies to LoRas and all their deviations"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 38,
"type": "Note",
"pos": [
-135,
1297
],
"size": {
"0": 284.3257141113281,
"1": 123.88604736328125
},
"flags": {},
"order": 3,
"mode": 0,
"title": "Note - Text Prompts",
"properties": {
"text": ""
},
"widgets_values": [
"These nodes are where you include the text for:\n - what you want in the picture (Positive Prompt, Green)\n - or what you don't want in the picture (Negative Prompt, Red)\n\nThis node type is called a \"PrimitiveNode\" if you are searching for the node type."
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 42,
"type": "Note",
"pos": [
-131,
1465
],
"size": {
"0": 260,
"1": 210
},
"flags": {},
"order": 4,
"mode": 0,
"title": "Note - Empty Latent Image",
"properties": {
"text": ""
},
"widgets_values": [
"This node sets the image's resolution in Width and Height.\n\nNOTE: For SDXL, it is recommended to use trained values listed below:\n - 1024 x 1024\n - 1152 x 896\n - 896 x 1152\n - 1216 x 832\n - 832 x 1216\n - 1344 x 768\n - 768 x 1344\n - 1536 x 640\n - 640 x 1536"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 37,
"type": "Note",
"pos": [
156,
1483
],
"size": {
"0": 330,
"1": 140
},
"flags": {},
"order": 5,
"mode": 0,
"title": "Note - Load Checkpoint REFINER",
"properties": {
"text": ""
},
"widgets_values": [
"This is a checkpoint model loader. \n - This is set up automatically with the optimal settings for whatever SD model version you choose to use.\n - In this example, it is for the Refiner SDXL model\n\nNOTE: When loading in another person's workflow, be sure to manually choose your own *local* model. This also applies to LoRas and all their deviations."
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 41,
"type": "Note",
"pos": [
1442,
1280
],
"size": {
"0": 320,
"1": 120
},
"flags": {},
"order": 6,
"mode": 0,
"title": "Note - VAE Decoder",
"properties": {
"text": ""
},
"widgets_values": [
"This node will take the latent data from the KSampler and, using the VAE, it will decode it into visible data\n\nVAE = Latent --> Visible\n\nThis can then be sent to the Save Image node to be saved as a PNG."
],
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 4,
"type": "CheckpointLoaderSimple",
"pos": [
22.061434312988258,
342.1210472519528
],
"size": {
"0": 350,
"1": 100
},
"flags": {},
"order": 7,
"mode": 0,
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
10
],
"slot_index": 0
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
3,
5
],
"slot_index": 1
},
{
"name": "VAE",
"type": "VAE",
"links": [],
"slot_index": 2
}
],
"title": "Load Checkpoint - BASE",
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"sd_xl_base_1.0.safetensors"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 17,
"type": "VAEDecode",
"pos": [
1059.2629181469717,
193.95553504150257
],
"size": {
"0": 200,
"1": 50
},
"flags": {},
"order": 22,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 25
},
{
"name": "vae",
"type": "VAE",
"link": 46
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
28
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "VAEDecode"
},
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 14,
"type": "PrimitiveNode",
"pos": [
524.5762590326644,
88.94741499576233
],
"size": {
"0": 492.8952941894531,
"1": 150.1459503173828
},
"flags": {},
"order": 8,
"mode": 0,
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
18,
22
],
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 0
}
],
"title": "Negative Prompt (Text)",
"properties": {},
"widgets_values": [
"deformed iris, deformed pupils, (semi-realistic, cgi, 2.5d, 3d, sketch, cartoon, drawing, anime:1.2), frame, mirror, polaroid, dark environment"
],
"color": "#322",
"bgcolor": "#533"
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
416,
340
],
"size": {
"0": 210,
"1": 54
},
"flags": {},
"order": 18,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 3
},
{
"name": "text",
"type": "STRING",
"link": 16,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 1
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
11
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"realistic, photograph of samurai cat wears battle armor, background is sakura garden"
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 7,
"type": "CLIPTextEncode",
"pos": [
417,
434
],
"size": {
"0": 210,
"1": 54
},
"flags": {},
"order": 16,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 5
},
{
"name": "text",
"type": "STRING",
"link": 18,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 1
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
12
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"text, watermark, anime, cartoon, 2d, 2.5d, 3d"
],
"color": "#322",
"bgcolor": "#533"
},
{
"id": 15,
"type": "CLIPTextEncode",
"pos": [
413,
583
],
"size": {
"0": 210,
"1": 54
},
"flags": {},
"order": 19,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 19
},
{
"name": "text",
"type": "STRING",
"link": 21,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 1
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
23
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"realistic, photograph of samurai cat wears battle armor, background is sakura garden"
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 16,
"type": "CLIPTextEncode",
"pos": [
413,
676
],
"size": {
"0": 210,
"1": 54
},
"flags": {},
"order": 17,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 20
},
{
"name": "text",
"type": "STRING",
"link": 22,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 1
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
24
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"text, watermark, anime, cartoon, 2d, 2.5d, 3d"
],
"color": "#322",
"bgcolor": "#533"
},
{
"id": 45,
"type": "PrimitiveNode",
"pos": [
271.55371497288235,
815.7191843080128
],
"size": {
"0": 210,
"1": 82
},
"flags": {},
"order": 9,
"mode": 0,
"outputs": [
{
"name": "INT",
"type": "INT",
"links": [
38,
41
],
"widget": {
"name": "steps",
"config": [
"INT",
{
"default": 20,
"min": 1,
"max": 10000
}
]
}
}
],
"title": "steps",
"properties": {},
"widgets_values": [
25,
"fixed"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 39,
"type": "Note",
"pos": [
496,
1288
],
"size": {
"0": 238.26402282714844,
"1": 80.99152374267578
},
"flags": {},
"order": 10,
"mode": 0,
"title": "Note - CLIP Encode (BASE)",
"properties": {
"text": ""
},
"widgets_values": [
"These nodes receive the text from the prompt and use the optimal CLIP settings for the specified checkpoint model (in this case: SDXL Base)"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 48,
"type": "Note",
"pos": [
1215,
1282
],
"size": {
"0": 213.90769958496094,
"1": 110.17156982421875
},
"flags": {},
"order": 11,
"mode": 0,
"title": "Note - Step Control",
"properties": {
"text": ""
},
"widgets_values": [
"These can be used to control the total sampling steps and the step at which the sampling switches to the refiner."
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 47,
"type": "PrimitiveNode",
"pos": [
490,
815
],
"size": {
"0": 210,
"1": 82
},
"flags": {},
"order": 12,
"mode": 0,
"outputs": [
{
"name": "INT",
"type": "INT",
"links": [
43,
44
],
"widget": {
"name": "end_at_step",
"config": [
"INT",
{
"default": 10000,
"min": 0,
"max": 10000
}
]
},
"slot_index": 0
}
],
"title": "end_at_step",
"properties": {},
"widgets_values": [
20,
"fixed"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 12,
"type": "CheckpointLoaderSimple",
"pos": [
23.306195155884314,
538.5501664140626
],
"size": {
"0": 350,
"1": 100
},
"flags": {},
"order": 13,
"mode": 0,
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
14
],
"shape": 3,
"slot_index": 0
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
19,
20
],
"shape": 3,
"slot_index": 1
},
{
"name": "VAE",
"type": "VAE",
"links": [
46
],
"shape": 3,
"slot_index": 2
}
],
"title": "Load Checkpoint - REFINER",
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"sd_xl_refiner_1.0.safetensors"
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 19,
"type": "SaveImage",
"pos": [
1293,
60
],
"size": {
"0": 620.4963989257812,
"1": 829.7885131835938
},
"flags": {},
"order": 23,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 28
}
],
"properties": {},
"widgets_values": [
"ComfyUI"
],
"color": "#222",
"bgcolor": "#000"
},
{
"id": 5,
"type": "EmptyLatentImage",
"pos": [
27,
746
],
"size": {
"0": 218.0976104736328,
"1": 106
},
"flags": {},
"order": 14,
"mode": 0,
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
27
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1080,
1344,
1
],
"color": "#323",
"bgcolor": "#535"
},
{
"id": 13,
"type": "PrimitiveNode",
"pos": [
22.576259032664293,
89.94741499576233
],
"size": {
"0": 489.5143737792969,
"1": 150.1459503173828
},
"flags": {},
"order": 15,
"mode": 0,
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
16,
21
],
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 0
}
],
"title": "Positive Prompt (Text)",
"properties": {},
"widgets_values": [
"(highest quality), masterpiece, intricate, high detail, professional photo, 8k uhd, sharp focus, (realistic photograph:1.2), (half a cat, half a futuristic cyborg white porcelain AI cat, in a vrtual high tech environment:1.2), bright environment"
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 10,
"type": "KSamplerAdvanced",
"pos": [
669,
301
],
"size": {
"0": 302.49639892578125,
"1": 603.7885131835938
},
"flags": {},
"order": 20,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 10
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 11
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 12
},
{
"name": "latent_image",
"type": "LATENT",
"link": 27
},
{
"name": "steps",
"type": "INT",
"link": 41,
"widget": {
"name": "steps",
"config": [
"INT",
{
"default": 20,
"min": 1,
"max": 10000
}
]
},
"slot_index": 4
},
{
"name": "end_at_step",
"type": "INT",
"link": 43,
"widget": {
"name": "end_at_step",
"config": [
"INT",
{
"default": 10000,
"min": 0,
"max": 10000
}
]
},
"slot_index": 5
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
13
],
"shape": 3,
"slot_index": 0
}
],
"title": "KSampler (Advanced) - BASE",
"properties": {
"Node name for S&R": "KSamplerAdvanced"
},
"widgets_values": [
"enable",
498946983519107,
"randomize",
25,
8,
"euler",
"normal",
0,
20,
"enable"
],
"color": "#223",
"bgcolor": "#335"
},
{
"id": 11,
"type": "KSamplerAdvanced",
"pos": [
979,
300
],
"size": {
"0": 301.49639892578125,
"1": 605.7885131835938
},
"flags": {},
"order": 21,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 14,
"slot_index": 0
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 23
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 24
},
{
"name": "latent_image",
"type": "LATENT",
"link": 13
},
{
"name": "steps",
"type": "INT",
"link": 38,
"widget": {
"name": "steps",
"config": [
"INT",
{
"default": 20,
"min": 1,
"max": 10000
}
]
},
"slot_index": 4
},
{
"name": "start_at_step",
"type": "INT",
"link": 44,
"widget": {
"name": "start_at_step",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 10000
}
]
}
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
25
],
"shape": 3,
"slot_index": 0
}
],
"title": "KSampler (Advanced) - REFINER",
"properties": {
"Node name for S&R": "KSamplerAdvanced"
},
"widgets_values": [
"disable",
0,
"fixed",
25,
8,
"euler",
"normal",
20,
10000,
"disable"
],
"color": "#223",
"bgcolor": "#335"
}
],
"links": [
[
3,
4,
1,
6,
0,
"CLIP"
],
[
5,
4,
1,
7,
0,
"CLIP"
],
[
10,
4,
0,
10,
0,
"MODEL"
],
[
11,
6,
0,
10,
1,
"CONDITIONING"
],
[
12,
7,
0,
10,
2,
"CONDITIONING"
],
[
13,
10,
0,
11,
3,
"LATENT"
],
[
14,
12,
0,
11,
0,
"MODEL"
],
[
16,
13,
0,
6,
1,
"STRING"
],
[
18,
14,
0,
7,
1,
"STRING"
],
[
19,
12,
1,
15,
0,
"CLIP"
],
[
20,
12,
1,
16,
0,
"CLIP"
],
[
21,
13,
0,
15,
1,
"STRING"
],
[
22,
14,
0,
16,
1,
"STRING"
],
[
23,
15,
0,
11,
1,
"CONDITIONING"
],
[
24,
16,
0,
11,
2,
"CONDITIONING"
],
[
25,
11,
0,
17,
0,
"LATENT"
],
[
27,
5,
0,
10,
3,
"LATENT"
],
[
28,
17,
0,
19,
0,
"IMAGE"
],
[
38,
45,
0,
11,
4,
"INT"
],
[
41,
45,
0,
10,
4,
"INT"
],
[
43,
47,
0,
10,
5,
"INT"
],
[
44,
47,
0,
11,
5,
"INT"
],
[
46,
12,
2,
17,
1,
"VAE"
]
],
"groups": [
{
"title": "Base Prompt",
"bounding": [
400,
266,
239,
237
],
"color": "#3f789e"
},
{
"title": "Refiner Prompt",
"bounding": [
401,
510,
235,
229
],
"color": "#3f789e"
},
{
"title": "Text Prompts",
"bounding": [
11,
11,
1026,
246
],
"color": "#3f789e"
},
{
"title": "Load in BASE SDXL Model",
"bounding": [
12,
268,
374,
188
],
"color": "#a1309b"
},
{
"title": "Load in REFINER SDXL Model",
"bounding": [
10,
464,
376,
195
],
"color": "#a1309b"
},
{
"title": "Empty Latent Image",
"bounding": [
12,
672,
244,
193
],
"color": "#a1309b"
},
{
"title": "VAE Decoder",
"bounding": [
1043,
118,
236,
140
],
"color": "#b06634"
},
{
"title": "Step Control",
"bounding": [
262,
744,
452,
166
],
"color": "#3f789e"
}
],
"config": {},
"extra": {},
"version": 0.4
}