NSFW
Please turn on the safety net for NSFW content.
Thank you.
I'm not sure what you mean by "the safety net"; there's no such thing built into FABRIC. If you're having problems with the model generating NSFW content, adding "nsfw" to the negative prompt should help.
Sorry, I meant the safety checker, which basically blocks NSFW content.
I see. As I mentioned, there's no such thing built into FABRIC; it would have to be added on top. Since this is just a research demo and provided as-is, I recommend cloning the Space or forking the repository in order to add new features, and opening a PR if it's something that could benefit the community.
I know the discussion was closed, but if you want help I could code something that adds NSFW tags to the negative prompt and filters them from the regular prompt. I don't know how "safety nets" are normally implemented, and I don't know how this method would affect normal image generation. When I look up how to make one, I mostly get articles on how to bypass it instead.
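A minimal sketch of that idea, assuming a simple word-level blocklist (the tag list, the `apply_prompt_filter` name, and the tokenization are illustrative, not part of FABRIC):

```python
# Illustrative blocklist; a real one would need to be far more thorough.
NSFW_TAGS = ["nsfw", "nude", "explicit"]

def apply_prompt_filter(prompt: str, negative_prompt: str) -> tuple[str, str]:
    """Remove blocklisted words from the prompt and append them to the negative prompt."""
    kept = [w for w in prompt.split() if w.lower().strip(",.") not in NSFW_TAGS]
    missing = [t for t in NSFW_TAGS if t not in negative_prompt.lower()]
    new_negative = ", ".join(filter(None, [negative_prompt.strip(", "), ", ".join(missing)]))
    return " ".join(kept), new_negative
```

One obvious limitation: word-level filtering misses synonyms and rephrasings, and stripping words can subtly change what a normal prompt means.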
Some context: if you want a neutered version for your kids, that's a fork.
I'm not into generating porn, but I still don't use Photoshop's generative AI at all because of the hair-trigger content filter. It won't do an explosion, it stops at the word "extreme", and so on.
Midjourney is a bit like that too, but that's OK, since it's a ready, usable tool.
An experimental product, however, is rendered unusable by such filters. You have to be able to play with it without frustration.
So if you want a safe version for your kids, do a fork, or ask someone to do one.
I could easily make it an option: just like the existing "Enable feedback" checkbox, I could add a "Disable NSFW filter" checkbox right under it, and enabling it would skip all the filter code.
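A rough sketch of how that toggle might be wired up in Gradio (the component names and the `run_generation` stub are placeholders, not FABRIC's actual app code):

```python
import gradio as gr

def run_generation(prompt, disable_nsfw_filter):
    images = []  # placeholder: call the actual FABRIC generation here
    if not disable_nsfw_filter:
        # run the safety checker on `images` here (see the sketch further down)
        pass
    return images

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    enable_feedback = gr.Checkbox(label="Enable feedback", value=True)
    disable_nsfw_filter = gr.Checkbox(label="Disable NSFW filter", value=False)
    gallery = gr.Gallery(label="Results")
    gr.Button("Generate").click(
        run_generation, inputs=[prompt, disable_nsfw_filter], outputs=gallery
    )

if __name__ == "__main__":
    demo.launch()
```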
I'd be open to having an option to enable/disable NSFW image filtering. While I personally haven't had issues with the model generating such content, and this is purely a research demo that leaves most responsibility up to the user, I understand that it's undesirable if NSFW content is generated unprompted.
If someone wants to tackle this: there already exist safety checkers for Stable Diffusion, which might be a good starting point (see the links below; a rough usage sketch follows them).
- Documentation: https://huggingface.co/docs/diffusers/v0.19.3/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline
- Safety checker on GitHub: https://github.com/huggingface/diffusers/blob/v0.19.3/src/diffusers/pipelines/stable_diffusion/safety_checker.py
- Safety checker model on HF Hub: https://huggingface.co/CompVis/stable-diffusion-safety-checker
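For reference, here's roughly how that checker can be invoked standalone. This is a sketch assuming the SD 1.x safety checker and the ViT-L/14 CLIP image processor; the `check_images` helper is not part of FABRIC or diffusers:

```python
import numpy as np
from PIL import Image
from transformers import CLIPImageProcessor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

# Load the checker and the CLIP processor it expects (IDs assume the SD 1.x setup).
safety_checker = StableDiffusionSafetyChecker.from_pretrained(
    "CompVis/stable-diffusion-safety-checker"
)
feature_extractor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

def check_images(pil_images):
    """Run the safety checker; flagged images come back blacked out."""
    clip_input = feature_extractor(pil_images, return_tensors="pt").pixel_values
    np_images = np.stack([np.array(im) for im in pil_images]).astype(np.float32) / 255.0
    checked, has_nsfw = safety_checker(images=np_images, clip_input=clip_input)
    pil_out = [Image.fromarray((im * 255).round().astype("uint8")) for im in checked]
    return pil_out, has_nsfw
```

In the Space, something like this would run on the generated images right before they're returned, and only when the "Disable NSFW filter" checkbox is unchecked.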