Fre2C committed
Commit db7da99
1 Parent(s): ea040df

Update README.md

Files changed (1)
  1. README.md +26 -8
README.md CHANGED

@@ -9,7 +9,17 @@ license: creativeml-openrail-m
 
 **You can try anything with it!**
 
-**Got to know many models and learned a lot about mixing from AOM3: https://huggingface.co/WarriorMama777/OrangeMixs and Anything: https://civitai.com/models/9409/or-anything-v5, thank you very much.**
+**Learned a lot from the following places, thank you very much.**
+
+https://huggingface.co/WarriorMama777/OrangeMixs
+
+https://civitai.com/models/9409/or-anything-v5
+
+https://economylife.net/u-net-marge-webui1111/
+
+https://docs.qq.com/doc/DTkRodlJ1c1VzcFBr?u=e7c714671e694797a04f1d58aff5c8b0
+
+https://docs.qq.com/doc/DQ1Vzd3VCTllFaXBv?_t=1685979317852&u=e7c714671e694797a04f1d58aff5c8b0
 
 **Suggestions for use:**
 
@@ -25,9 +35,7 @@ license: creativeml-openrail-m
 
 I usually use clip2 when the results don't meet expectations
 
-If you want a **stronger** **light/dark contrast** effect, you can try a **light-and-shadow lora**, such as **epi_noiseoffset:** https://civitai.com/models/13941/epinoiseoffset
-
-If you want more Asian faces, **character loras** are a good choice, such as the **Doll Likeness** series: https://huggingface.co/Kanbara/doll-likeness-series
+**Use lora as you like!**
 
 
 
@@ -37,7 +45,19 @@ All preview images do not use embedding, lora
 
 **You can try anything with it!**
 
-**I got to know many models and learned a lot of mixing experience from AOM3: https://huggingface.co/WarriorMama777/OrangeMixs and Anything: https://civitai.com/models/9409/or-anything-v5, thank you very much.**
+**I have learned a lot from the following places, thank you very much.**
+
+https://huggingface.co/WarriorMama777/OrangeMixs
+
+https://civitai.com/models/9409/or-anything-v5
+
+https://economylife.net/u-net-marge-webui1111/
+
+https://rentry.org/Merge_Block_Weight_-china-_v1_Beta#1-introduction (this is the translated version)
+
+https://docs.qq.com/doc/DQ1Vzd3VCTllFaXBv?_t=1685979317852&u=e7c714671e694797a04f1d58aff5c8b0
+
+https://www.figma.com/file/1JYEljsTwm6qRwR665yI7w/Merging-lab%E3%80%8CHosioka-Fork%E3%80%8D?type=design&node-id=1-69
 
 **Suggestions for use:**
 
@@ -53,9 +73,7 @@ If you feel that the content of the picture is **not rich enough**, you can try
 
 I usually use clip2 when the results don't meet expectations
 
-If you want a **stronger contrast** between **light and dark**, you can try a **light and shadow lora**, such as **epi_noiseoffset:** https://civitai.com/models/13941/epinoiseoffset
-
-If you want more Asian faces, the **character class lora** is a good choice, such as the **Doll Likeness** series: https://huggingface.co/Kanbara/doll-likeness-series
+Use lora as you like!
 
 
 I use these two VAEs:
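
The "clip2" suggestion above refers to the CLIP-skip setting in Stable Diffusion front ends: with skip = 2, text conditioning is taken from the penultimate hidden layer of the CLIP text encoder instead of the final one. A minimal sketch of that indexing convention, assuming the usual 12-layer SD1.x text encoder (the layer names and the helper function here are illustrative placeholders, not a real API):

```python
def select_clip_hidden_state(hidden_states, clip_skip):
    """Pick which text-encoder layer output conditions the image model.

    hidden_states: per-layer outputs, index 0 = first layer, -1 = last.
    clip_skip=1 keeps the final layer (the default); clip_skip=2
    ("clip2" in this README) uses the penultimate layer instead.
    """
    if clip_skip < 1:
        raise ValueError("clip_skip must be >= 1")
    return hidden_states[-clip_skip]

# Illustrative stand-ins for the 12 transformer layers of the SD1.x
# CLIP ViT-L/14 text encoder (placeholder strings, not real tensors).
layers = [f"hidden_layer_{i}" for i in range(1, 13)]
print(select_clip_hidden_state(layers, 1))  # hidden_layer_12
print(select_clip_hidden_state(layers, 2))  # hidden_layer_11
```

In the AUTOMATIC1111 WebUI this is the "Clip skip" setting; recent versions of diffusers expose a similar `clip_skip` argument on the Stable Diffusion pipelines.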