Girl lying on the grass
I don't know if it's my problem or a problem with SD's official training. SDXL can express this picture very well, but SD3 cannot.
Try adding "grotesque monster" into the negative prompt.
As far as I understand, short prompts turn into nonsense with this model.
Well, my first impression of this model is very disappointing... What are your impressions?
@CBodensteiner
It is really good at some things and really bad at others.
For non-human objects, it is SO good. It does text brilliantly and is very smart at prompt following. It's much better than SDXL, Stable Cascade, and PixArt.
For humans it's much worse; it's around 50-50. Sometimes it's decent and sometimes it's garbage, as shown above.
If you are planning on non-human image generation, this is a must; otherwise use SDXL.
Really? I tried a lot of interior design (non-human) prompts, and the results were very disappointing; SDXL and Flash are way better...
@CBodensteiner could you provide an example prompt, and an image that you would consider good?
I could give you 10 more, so I'm really not impressed with SD3, in the interior domain at least...
@CBodensteiner
That's quite a bad prompt for SD3. SD3 is designed to be better at detailed sentences.
with this prompt
**Cozy Scandinavian Living Room** in a **Villa** basking in the warm **Morning Sun**. Soft **Candlelight** illuminates the **Rustic Wooden** coffee table, while **Creamy White** couches and **Woven Wool** armchairs invite relaxation.
Well, this is still not good. There are a lot of errors even with your prompt. Look at the furniture: total garbage...
I think they didn't use a lot of training images in this domain...
That's because they're using a compressed dataset.
Front-facing shots work well, but side profiles of people have weird ears and eyes, as do poses other than standing right in front of the camera. SDXL did better in that regard. Sadly...
The training dataset needs way more photos of people in different poses.
Everything else is vastly improved though. Just look at the texture detail and the vibrance. So I wouldn't say SD3 is bad per se, it's just very bad at (human) anatomy for some reason. You can create some awesome looking landscapes with it for example. And text of course is also vastly improved.
I played with it a bit and it sometimes works (when avoiding humans), but it's just disgusting practice from SAI to poison their own model like that. They're killing what's left of the sympathy people had for them with decisions like this. Even people who wanted to support them no longer want to look at this company, because of crippled releases like that.
prompt from another forum:
negative_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
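For anyone who wants to try a prompt like that, it is normally passed as a single comma-separated string via the `negative_prompt` parameter. Here is a minimal sketch using Hugging Face diffusers; the model id and generation settings in the comments are my assumptions, not something stated in this thread:

```python
# Build the negative prompt as a list of terms (abbreviated here),
# then join it into the single string that the pipeline expects.
negative_terms = [
    "lowres", "text", "error", "cropped", "worst quality", "low quality",
    "jpeg artifacts", "bad anatomy", "bad proportions", "extra limbs",
    "watermark", "signature",
]
negative_prompt = ", ".join(negative_terms)

# With diffusers installed and a GPU available, usage would look like:
# from diffusers import StableDiffusion3Pipeline
# import torch
# pipe = StableDiffusion3Pipeline.from_pretrained(
#     "stabilityai/stable-diffusion-3-medium-diffusers",
#     torch_dtype=torch.float16,
# ).to("cuda")
# image = pipe(
#     prompt="Girl lying on the grass",
#     negative_prompt=negative_prompt,
#     num_inference_steps=28,  # assumed typical SD3 settings
#     guidance_scale=7.0,
# ).images[0]

print(negative_prompt)
```

Whether such a long negative prompt actually helps with SD3 is debated in the replies below.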
@LolaRoseHB: you don't need that kind of crap in the negative prompt. SD3 can work without it, too:
But the problem is that if you change the prompt into anything even remotely related to what some brain-dead "safety" people considered unsafe, the whole generation turns into a mess:
So yeah, I can really see why CEOs are letting some "safety" teams go. A good safety team would filter hardcore and illegal content out of the training dataset, NOT make the model unable to generate artistic nudity or lose its understanding of basic anatomy.