---
title: Flux Inpaint Controlnet
emoji: π
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
hf_oauth: true
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Flux Inpaint AI Model

This Space demonstrates the Flux inpaint AI model, which uses ControlNet for image inpainting tasks.
## How to use

1. Upload an input image
2. Upload a mask image (white areas will be inpainted)
3. Enter a prompt describing the desired output
4. Adjust the sliders for fine-tuning (optional)
5. Click "Submit" to generate the inpainted image
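The mask convention in step 2 (white marks the region to repaint, black keeps the original) can be illustrated with a small NumPy sketch; the 127 threshold for binarizing a grayscale mask is an assumption, not necessarily what the app uses:

```python
import numpy as np

# Hypothetical 4x4 grayscale mask: 255 marks pixels to inpaint, 0 keeps the original.
mask = np.array([
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
], dtype=np.uint8)

# Binarize: True where the model should regenerate the image.
inpaint_region = mask > 127
print(inpaint_region.sum())  # -> 4 pixels will be regenerated
```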
## Model Details

This Space uses the following models:

- Base model: black-forest-labs/FLUX.1-dev
- ControlNet model: YishaoAI/flux-dev-controlnet-canny-kid-clothes

The inpainting process uses a Canny edge detector for additional control.
## Parameters

- Strength: Controls the strength of the inpainting effect (0-1)
- Number of Inference Steps: More steps can improve quality at the cost of slower generation
- Guidance Scale: Controls how closely the image follows the prompt
- ControlNet Conditioning Scale: Adjusts the influence of the ControlNet model
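As a rough guide, these sliders map onto keyword arguments of a diffusers-style inpainting call; the argument names follow the diffusers convention, and the default values below are assumptions for illustration, not the Space's actual settings:

```python
# Illustrative defaults only; the Space's real values may differ.
GENERATION_DEFAULTS = {
    "strength": 0.8,                       # 0-1: how strongly masked areas are repainted
    "num_inference_steps": 28,             # more steps: higher quality, slower
    "guidance_scale": 3.5,                 # higher: follows the prompt more closely
    "controlnet_conditioning_scale": 0.5,  # higher: stronger Canny-edge influence
}

# In a diffusers-style app these would be forwarded to the pipeline call, e.g.
# pipe(prompt=..., image=..., mask_image=..., control_image=..., **GENERATION_DEFAULTS)
```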
Enjoy experimenting with the Flux Inpaint AI Model!