
Kolors-ControlNet-Canny weights and inference code

📖 Introduction

We provide two ControlNet weights based on Kolors-BaseModel, Canny and Depth, together with inference code. You can find some example images below.

1. ControlNet Demos

2. ControlNet and IP-Adapter-Plus Demos

We also provide code for joint inference with Kolors-IP-Adapter and Kolors-ControlNet.

📊 Evaluation

To evaluate model performance, we compiled a test set of more than 200 images and text prompts and invited several image experts to provide fair ratings of the results generated by the different models. The experts rated the generated images on four criteria: visual appeal, text faithfulness, conditional controllability, and overall satisfaction. Conditional controllability measures the ControlNet's ability to preserve spatial structure, while the other criteria follow the evaluation standards of the BaseModel. The results are summarized in the tables below, where Kolors-ControlNet achieves better performance on all criteria.

1. Canny

| Model | Average Overall Satisfaction | Average Visual Appeal | Average Text Faithfulness | Average Conditional Controllability |
|---|---|---|---|---|
| SDXL-ControlNet-Canny | 3.14 | 3.63 | 4.37 | 2.84 |
| Kolors-ControlNet-Canny | 4.06 | 4.64 | 4.45 | 3.52 |

2. Depth

| Model | Average Overall Satisfaction | Average Visual Appeal | Average Text Faithfulness | Average Conditional Controllability |
|---|---|---|---|---|
| SDXL-ControlNet-Depth | 3.35 | 3.77 | 4.26 | 4.5 |
| Kolors-ControlNet-Depth | 4.12 | 4.12 | 4.62 | 4.6 |

SDXL-ControlNet-Canny and SDXL-ControlNet-Depth use DreamShaper-XL as the backbone model.


🛠️ Usage

Requirements

The dependencies and installation steps are basically the same as for Kolors-BaseModel.

Weights download:

# Canny - ControlNet
huggingface-cli download --resume-download Kwai-Kolors/Kolors-ControlNet-Canny --local-dir weights/Kolors-ControlNet-Canny

# Depth - ControlNet
huggingface-cli download --resume-download Kwai-Kolors/Kolors-ControlNet-Depth --local-dir weights/Kolors-ControlNet-Depth
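
The same downloads can also be scripted with the huggingface_hub Python API; this is a minimal equivalent sketch (snapshot_download resumes interrupted downloads by default).

# Download both ControlNet weights with the huggingface_hub Python API
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Kwai-Kolors/Kolors-ControlNet-Canny", local_dir="weights/Kolors-ControlNet-Canny")
snapshot_download(repo_id="Kwai-Kolors/Kolors-ControlNet-Depth", local_dir="weights/Kolors-ControlNet-Depth")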

If you intend to use the depth estimation network, please make sure to download its corresponding model weights:

huggingface-cli download lllyasviel/Annotators ./dpt_hybrid-midas-501f0c75.pt --local-dir ./controlnet/annotator/ckpts  

Inference:

a. Using Canny ControlNet:

python ./controlnet/sample_controlNet.py ./controlnet/assets/woman_1.png 一个漂亮的女孩,高品质,超清晰,色彩鲜艳,超高分辨率,最佳品质,8k,高清,4K Canny
# Prompt: "A beautiful girl, high quality, ultra sharp, vivid colors, ultra-high resolution, best quality, 8k, HD, 4K"

python ./controlnet/sample_controlNet.py ./controlnet/assets/dog.png 全景,一只可爱的白色小狗坐在杯子里,看向镜头,动漫风格,3d渲染,辛烷值渲染 Canny
# Prompt: "Panorama, a cute white puppy sitting in a cup and looking at the camera, anime style, 3D render, octane render"

# The image will be saved to "controlnet/outputs/"
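
For reference, the Canny condition that drives this ControlNet is a standard Canny edge map of the input image. The sketch below is an illustration rather than the repository's own preprocessing code; the OpenCV thresholds (100, 200) are example values.

# Illustrative sketch: build a Canny edge map like the one used as the ControlNet condition
# (not this repo's own preprocessing; thresholds are example values)
import cv2
import numpy as np
from PIL import Image

image = cv2.imread("./controlnet/assets/woman_1.png")
edges = cv2.Canny(image, 100, 200)        # single-channel edge map
edges = np.stack([edges] * 3, axis=-1)    # replicate to 3 channels for visualization
Image.fromarray(edges).save("canny_condition.png")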

b. Using Depth ControlNet:

python ./controlnet/sample_controlNet.py ./controlnet/assets/woman_2.png 新海诚风格,丰富的色彩,穿着绿色衬衫的女人站在田野里,唯美风景,清新明亮,斑驳的光影,最好的质量,超细节,8K画质 Depth
# Prompt: "Makoto Shinkai style, rich colors, a woman in a green shirt standing in a field, beautiful scenery, fresh and bright, dappled light and shadow, best quality, ultra detailed, 8K quality"

python ./controlnet/sample_controlNet.py ./controlnet/assets/bird.png 一只颜色鲜艳的小鸟,高品质,超清晰,色彩鲜艳,超高分辨率,最佳品质,8k,高清,4K Depth
# Prompt: "A brightly colored little bird, high quality, ultra sharp, vivid colors, ultra-high resolution, best quality, 8k, HD, 4K"

# The image will be saved to "controlnet/outputs/"
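
The Depth condition is a monocular depth map estimated from the input image; the dpt_hybrid-midas checkpoint downloaded above is a MiDaS model. As an illustration (using the controlnet_aux package as a stand-in for this repository's own annotator code), a depth map can be produced like this:

# Illustrative sketch: estimate a depth map to use as the ControlNet condition
# (controlnet_aux MidasDetector as a stand-in for this repo's annotator code)
from controlnet_aux import MidasDetector
from PIL import Image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
image = Image.open("./controlnet/assets/bird.png")
depth_map = midas(image)        # PIL image containing the estimated depth
depth_map.save("depth_condition.png")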

c. Using Depth ControlNet + IP-Adapter-Plus:

If you intend to use Kolors-IP-Adapter-Plus, please make sure to download its corresponding model weights first (a download sketch is given after the example below).

python ./controlnet/sample_controlNet_ipadapter.py ./controlnet/assets/woman_2.png ./ipadapter/asset/2.png  一个红色头发的女孩,唯美风景,清新明亮,斑驳的光影,最好的质量,超细节,8K画质 Depth
# Prompt: "A girl with red hair, beautiful scenery, fresh and bright, dappled light and shadow, best quality, ultra detailed, 8K quality"

# The image will be saved to "controlnet/outputs/"
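
The IP-Adapter weights are published separately from the ControlNet weights. Assuming they are hosted in the Kwai-Kolors/Kolors-IP-Adapter-Plus repository, a minimal download sketch with the huggingface_hub API looks like this:

# Assumption: the Kolors IP-Adapter-Plus weights are hosted at Kwai-Kolors/Kolors-IP-Adapter-Plus
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Kwai-Kolors/Kolors-IP-Adapter-Plus", local_dir="weights/Kolors-IP-Adapter-Plus")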

Acknowledgments

  • Thanks to ControlNet for providing the codebase.
