Update README.md

This may be the best-quality 6-10 step model; in some details, it surpasses Flux.1.

Based on **[Flux-Fusion-V2](https://huggingface.co/Anibaaal/Flux-Fusion-V2-4step-merge-gguf-nf4/tree/main)**, merged with **[flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill/tree/main)**, and fine-tuned using **[ComfyUI](https://github.com/comfyanonymous/ComfyUI)**, **[Block_Patcher_ComfyUI](https://github.com/cubiq/Block_Patcher_ComfyUI)**, **[ComfyUI_essentials](https://github.com/cubiq/ComfyUI_essentials)**, and other tools.

Recommended: 6-10 steps. Greatly improved quality compared to other Flux.1 models.

![](./compare.jpg)

At users' request, a GGUF Q8_0 quantized version of the model file is now provided. Due to time constraints, this version has not been tested in detail; generated images may differ slightly in fine detail from the fp8 model, but the difference should be small. The model loading and conversion methods are described below, and sample test images follow:

![](./gguf-sample.jpg)
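
As a quick sanity check before loading, you can read the GGUF file's tensor metadata to confirm it really is the Q8_0 quantization. Below is a minimal sketch using the `gguf` Python package from the llama.cpp project (`pip install gguf`); the file name is a hypothetical placeholder for wherever you saved the download:

```python
# Minimal sketch: list the quantization types inside a GGUF model file.
# Assumes `pip install gguf`; the path below is a placeholder.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("flux-fusion-q8_0.gguf")  # placeholder file name

# A Q8_0 UNET should report mostly Q8_0 tensors, plus a few
# F32/F16 tensors for things like norm weights and biases.
counts = Counter(t.tensor_type.name for t in reader.tensors)
for type_name, n in counts.most_common():
    print(f"{type_name}: {n} tensors")
```

If the counts come back mostly `Q8_0`, the file matches the quantization described above.
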
# Recommended:

**UNET versions** (model only) need Text Encoders and a VAE. I recommend using the CLIP and Text Encoder models below for better prompt guidance:

![](./workflow.png)
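
For reference, if you drive ComfyUI through its HTTP API rather than the browser UI, the loader portion of such a workflow looks roughly like the sketch below. This is an illustrative fragment, not this repository's official workflow; all file names are placeholders for the UNET, CLIP, T5, and VAE files you actually installed:

```python
# Minimal sketch: the three loader nodes of a ComfyUI API-format graph
# for a UNET-only model. File names are placeholders. These nodes must be
# wired into a complete graph (sampler, prompts, VAE decode, save) before
# ComfyUI will accept it.
import json

loader_nodes = {
    # UNET-only model file from models/unet. For the GGUF version, the
    # ComfyUI-GGUF custom node provides a "UnetLoaderGGUF" class instead.
    "1": {
        "class_type": "UNETLoader",
        "inputs": {"unet_name": "flux-fusion-v2.safetensors",
                   "weight_dtype": "default"},
    },
    # CLIP-L plus T5-XXL text encoders, paired in Flux mode.
    "2": {
        "class_type": "DualCLIPLoader",
        "inputs": {"clip_name1": "t5xxl_fp16.safetensors",
                   "clip_name2": "clip_l.safetensors",
                   "type": "flux"},
    },
    # The Flux autoencoder from models/vae.
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
}

print(json.dumps({"prompt": loader_nodes}, indent=2))
# A complete graph is submitted the same way, e.g. with the requests package:
#   requests.post("http://127.0.0.1:8188/prompt", json={"prompt": full_graph})
```
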

# Thanks to:

https://huggingface.co/Anibaaal, Flux-Fusion is a very good merged and tuned model.

https://huggingface.co/John6666, shared the model convert script and the model co…

https://github.com/city96/ComfyUI-GGUF, native support for GGUF quantized models.

https://github.com/leejet/stable-diffusion.cpp, provides pure C/C++ GGUF model conversion scripts.

## LICENSE