---
pipeline_tag: text-to-image
widget:
- text: >-
     movie scene screencap, cinematic footage. thanos smelling a little yellow rose. extreme wide angle,
  output:
    url: 1man.png 
- text: >-
     A tiny robot taking a break under a tree in the garden 
  output:
    url: robot.png 
- text: >-
     mystery
  output:
    url: mystery.png
- text: >-
     a cat wearing sunglasses in the summer
  output:
    url: cat.png
- text: >-
    robot holding a sign that says ’a storm is coming’ 
  output:
    url: storm.png
- text: >-
    the vibrance of the human soul
  output:
    url: soul.png
- text: >-
    Lady of War, chique dark clothes, vinyl, imposing pose, anime style, 90s
  output:
    url: anime.png
- text: >-
    natural photography of a man, glasses, cinematic, 
  output:
    url: glasses.png

license: cc-by-nc-nd-4.0
---
<Gallery />

# Constructive Deconstruction: Domain-Agnostic Debiasing of Diffusion Models

A paper is currently in the works. We believe the breakthrough, and with it the release of the weights, should come before any paper or waiting period.

## Introduction

Constructive Deconstruction is a novel approach to debiasing diffusion models used in generative tasks like image synthesis. This method enhances the quality and fidelity of generated images across various domains by removing biases inherited from the training data. Our technique involves overtraining the model to a controlled noisy state, applying nightshading, and using bucketing techniques to realign the model's internal representations.

## Methodology

### Overtraining to Controlled Noisy State
By purposely overtraining the model until it predictably fails, we create a controlled noisy state. This state helps in identifying and addressing the inherent biases in the model's training data.
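The card does not include the training code, but the idea of driving a model into a *predictable* failure can be sketched with a toy gradient-descent loop: with a deliberately too-aggressive setting, the iterate diverges at a step we can compute in advance, giving a controlled, detectable failure state. Everything below (the quadratic loss, the thresholds, the function name) is an illustrative assumption, not the actual procedure used for Mobius:

```python
def train_until_failure(lr, steps=50, w0=1.0, blowup=1e6):
    """Gradient descent on loss(w) = w^2, i.e. the update w <- w - lr * 2w.

    For lr > 1.0 the per-step multiplier (1 - 2*lr) has magnitude > 1,
    so |w| grows geometrically: the run fails *predictably*, and we can
    detect the exact step at which the controlled failure state is reached.
    """
    w = w0
    for step in range(steps):
        w -= lr * 2 * w              # gradient of w^2 is 2w
        if abs(w) > blowup:          # predictable failure detected
            return step, w
    return None, w                   # stable run: no failure
```

With a small learning rate the loop converges quietly; past the stability threshold it blows up at a step that is known before the run starts, which is the "controlled" part of the analogy.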

### Nightshading
Nightshading is repurposed to induce a controlled failure, making it easier to retrain the model. This involves injecting carefully selected data points to stress the model and cause predictable failures.
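The specific poisoned data points are not published. As a toy illustration of the mechanism only, the sketch below nudges a chosen fraction of feature vectors toward a "decoy" concept vector; the function name, `epsilon`, and `fraction` are all hypothetical stand-ins, not the actual Nightshade algorithm:

```python
import random

def poison_dataset(samples, decoy, epsilon=0.1, fraction=0.2, seed=0):
    """Return a copy of `samples` in which a fraction of feature vectors
    are blended toward a `decoy` concept vector -- a toy stand-in for the
    carefully selected stress data points described above."""
    rng = random.Random(seed)                      # reproducible selection
    poisoned = [list(s) for s in samples]          # leave originals intact
    k = max(1, int(len(samples) * fraction))
    for i in rng.sample(range(len(poisoned)), k):
        poisoned[i] = [(1 - epsilon) * x + epsilon * d
                       for x, d in zip(poisoned[i], decoy)]
    return poisoned
```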

### Bucketing
Using mathematical techniques such as slerp (Spherical Linear Interpolation) and bislerp (its bilinear spherical variant), we merge the induced noise back into the model. This step highlights the model's learned knowledge while suppressing biases.
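The merge code itself is not part of this card; a minimal, self-contained sketch of slerp over two weight vectors (treating each tensor as a flat list of floats) might look like the following. The function name and the epsilon guard are our own choices:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t = 0 returns v0, t = 1 returns v1; intermediate t values move along
    the arc between the two directions rather than along a straight line.
    """
    norm0 = math.sqrt(sum(x * x for x in v0)) or eps
    norm1 = math.sqrt(sum(x * x for x in v1)) or eps
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))           # numerical safety before acos
    theta = math.acos(dot)                   # angle between the directions
    if theta < eps:                          # nearly parallel: plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

Unlike linear interpolation, slerp preserves the magnitude of unit-norm directions, which is why it is a common choice when merging model checkpoints.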

### Retraining and Fine-Tuning
The noisy state is retrained on a large, diverse dataset to create a new base model called "Mobius." Initial issues such as grainy details and inconsistent colors are resolved during fine-tuning, resulting in high-quality, unbiased outputs.

## Results and Highlights

### Increased Diversity of Outputs
Training the model on high-quality data naturally increases the diversity of the generated outputs without intentionally loosening associations. This leads to improved generalization and variety in generated images.

### Empirical Validation
Extensive experiments and fine-tuning demonstrate the effectiveness of our method, resulting in high-quality, unbiased outputs across various styles and domains.


## Usage and Recommendations


- Requires a CLIP skip of -3

This model supports and encourages experimentation with various tags, offering users the freedom to explore their creative visions in depth.
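As a rough illustration of what the CLIP-skip setting does: the text encoder produces one hidden state per layer, and a skip of 3 (written -3 in some UIs) selects the output three layers before the final one. The helper below is a hypothetical sketch of that indexing, not code from this model, and sign conventions differ between front-ends:

```python
def apply_clip_skip(hidden_states, clip_skip):
    """Pick the text-encoder hidden state that a 'CLIP skip' setting selects.

    hidden_states: per-layer outputs, index 0 = earliest, index -1 = final
    layer. A skip of 1 (or 0/None) means "use the final layer"; a skip of
    3 or -3 means "stop three layers before the end".
    """
    k = abs(clip_skip)
    if k <= 1:
        return hidden_states[-1]     # no skipping: use the final layer
    return hidden_states[-k]         # count back k layers from the end
```

In `diffusers`, this typically corresponds to passing a `clip_skip` argument to the pipeline call; in A1111-style UIs it is the global "Clip skip" setting.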

## License

This model is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.