
# controlnet-manhattan23/output_train_colormap_coconut

These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below.

- prompt: A beautiful woman taking a picture with her smart phone., People underneath an arched bridge near the water. (images_0)
- prompt: A young man bending next to a toilet., A man is kneeling and holding on to a toilet. (images_1)
- prompt: Two people are sitting on chairs talking at a corner., Two men sitting on the street in front of a building. (images_2)

## Intended uses & limitations

### How to use

# TODO: add an example code snippet for running this diffusion pipeline
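In the meantime, here is a minimal sketch of running these weights with the `diffusers` `StableDiffusionControlNetPipeline`. The conditioning image below is a synthetic placeholder gradient (the real conditioning type for this checkpoint is not documented in this card); replace `make_condition_image()` with your actual colormap input. The pipeline portion is wrapped in a `try` block because it requires `torch`, `diffusers`, network access to download the weights, and ideally a CUDA GPU.

```python
# Hedged example: checkpoint names are taken from this card; the conditioning
# image is a dummy gradient, not the real conditioning format.
import numpy as np
from PIL import Image


def make_condition_image(size=512):
    """Build a placeholder colormap conditioning image; replace with a real one."""
    grad = np.linspace(0, 255, size, dtype=np.uint8)
    rgb = np.stack([np.tile(grad, (size, 1))] * 3, axis=-1)  # (H, W, 3) uint8
    return Image.fromarray(rgb)


condition = make_condition_image()

try:
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Load the ControlNet weights from this repository on top of the base model.
    controlnet = ControlNetModel.from_pretrained(
        "manhattan23/output_train_colormap_coconut", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        "A beautiful woman taking a picture with her smart phone.",
        image=condition,
        num_inference_steps=30,
    ).images[0]
    image.save("output.png")
except Exception as exc:  # torch/diffusers/GPU/network may be unavailable
    print(f"pipeline not run: {exc}")
```

The prompt shown is one of the validation prompts listed above; any prompt works.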

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]

