---
title: CatCon Controlnet SD 1 5 B2
emoji: 🐨
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: true
license: mit
tags:
  - jax-diffusers-event
  - jax
  - Text-To-Image
  - Diffusers
  - Controlnet
  - Stable-Diffusion
datasets:
  - animelover/danbooru2022
models:
  - Ryukijano/CatCon-Controlnet-WD-1-5-b2R
  - Cognomen/CatCon-Controlnet-WD-1-5-b2
---

Experimental proof of concept made for the [Huggingface JAX/Diffusers community sprint](https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint).

[Demo available here](https://huggingface.co/spaces/Ryukijano/CatCon-One-Shot-Controlnet-SD-1-5-b2)

[My teammate's demo is available here](https://huggingface.co/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2)

This is a controlnet for the Stable Diffusion checkpoint [Waifu Diffusion 1.5 beta 2](https://huggingface.co/waifu-diffusion/wd-1-5-beta2), which aims to guide image generation by conditioning outputs on patches of images drawn from the same category as the training target examples. The current checkpoint has been trained for approx. 100k steps on a filtered subset of [Danbooru 2021](https://gwern.net/danbooru2021), using artists as the conditioned category, with the aim of learning robust style transfer from an image example.
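
Since the controlnet was trained on 768x768 square crops, a style example generally needs to be center-cropped and resized before it is used as a conditioning image. The helper below is an illustrative sketch of that preprocessing, not code from this Space; the function name and the [0, 1] normalization are assumptions.

```python
import numpy as np
from PIL import Image

def prepare_condition_patch(image: Image.Image, size: int = 768) -> np.ndarray:
    """Center-crop a style example to a square and resize it to the
    768x768 resolution the controlnet was trained on (assumed
    preprocessing). Returns a float32 array in [0, 1], shape (size, size, 3)."""
    w, h = image.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    patch = image.crop((left, top, left + side, top + side))
    patch = patch.resize((size, size), Image.LANCZOS)
    return np.asarray(patch.convert("RGB"), dtype=np.float32) / 255.0

# Example with a synthetic non-square (900x600) image:
example = Image.fromarray(np.random.randint(0, 256, (600, 900, 3), dtype=np.uint8))
cond = prepare_condition_patch(example)
print(cond.shape)  # (768, 768, 3)
```

Cropping before resizing keeps the patch undistorted, which matters given the aspect-ratio limitation noted below.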
Major limitations:

- The current checkpoint was trained on 768x768 crops without aspect ratio bucketing, so some loss of coherence can be expected for non-square aspect ratios.
- The training dataset is extremely noisy and was used without filtering stylistic outliers from within each category, so performance may be less than ideal. A more diverse dataset with a larger variety of styles and categories would likely perform better.
- The Waifu Diffusion base model is a hybrid anime/photography model and can unpredictably jump between those modalities.
- Because styling is sensitive to divergences between model checkpoints, the capabilities of this controlnet are not expected to transfer predictably to other SD 2.X checkpoints.

Waifu Diffusion 1.5 beta 2 is licensed under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/). This controlnet imposes no restrictions beyond the MIT license, but it cannot be used independently of a base model.

---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference