---
license: apache-2.0
datasets:
- KBlueLeaf/danbooru2023-webp-4Mpixel
- KBlueLeaf/danbooru2023-metadata-database
base_model:
- black-forest-labs/FLUX.1-schnell
- FA770/Sumeshi_Flux.1_S_v002E
pipeline_tag: text-to-image
tags:
- anime
- girls
---

![sample_image](./sample_images/1.webp)

# Model Information

**Note:** This model is Schnell-based, but it requires a (distilled) guidance scale of 3 or 5 and a CFG scale of 3 or higher (this is distinct from the guidance scale), with 20 or more steps. It needs to be used with `clip_l_sumeshi_f1s`.

My English is terrible, so I use translation tools.

## Description

Sumeshi flux.1 S is an experimental anime model made to verify whether de-distilling a Schnell-based model and enabling CFG can work. You can use a negative prompt, which works to some extent. Because this model uses CFG, generation takes about twice as long as a regular FLUX model at the same number of steps. The output is blurry and the style varies depending on the prompt, perhaps because the model has not been fully trained.

### v004G

This is a test model aimed at reducing blurriness in low-step outputs (around 20 steps) by introducing guidance. Blurriness in both bright and dark outputs has been reduced. Because training used parameters pushed to the limit to save time, prompt adherence has worsened. The recommended parameters have been updated, so please refer to the `Usage (v004G)` section.

After verification, two factors were suspected of causing the blurriness, so I reinforced these areas during training:

- **Guidance parameter:** In v002E this layer was filled with zeros; this time it was He-initialized and trained, both with full finetuning and with the `network_args "in_dims"` option. This enabled the guidance scale to function properly. Although the reason is unclear, outputs appear abnormal at scales other than 3 and 5.
- **Timestep sampling:** Previously `discrete_flow_shift 3.2` was used, but it was suspected to be a cause of the poor response at low step counts. Verification showed that dropping the shift and using a smaller `sigmoid_scale` reduced blurriness (a sketch of this sampling appears in the Training section below). However, insufficient training leads to noisy backgrounds, so further hyperparameter exploration seems necessary.

## Usage (v004G)

- **Resolution:** like other FLUX models
- **(Distilled) Guidance Scale:** 3 or 5
- **CFG Scale:** 6 ~ 9 (7 recommended; a scale of 1 does not produce decent outputs)
- **Steps:** 20 ~ 30 (not around 4 steps)
- **Sampler:** Euler
- **Scheduler:** Simple, Beta

A minimal generation sketch is shown after the prompt-format section below.

## Prompt Format (from [Kohaku-XL-Epsilon](https://huggingface.co/KBlueLeaf/Kohaku-XL-Epsilon))

```
<1girl/1boy/1other/...>, <character>, <series>, <artist>, <general tags>, <quality tags>, <year tags>, <meta tags>, <rating tags>
```

Due to the small amount of training, the `<artist>` tags are almost non-functional. As training focused on girl characters, the model may not generate boys or other non-person subjects well. Since the dataset was created using hakubooru, the prompt format is the same as the KohakuXL format. However, based on experiments, it is not strictly necessary to follow this format, as the model interprets natural language to some extent.

### Special Tags

- **Quality Tags:** masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality
- **Rating Tags:** safe, sensitive, nsfw, explicit
- **Date Tags:** newest, recent, mid, early, old
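As a quick reference, here is a minimal generation sketch using diffusers, combining the recommended parameters with the prompt format above. It is not an official example: it assumes a recent diffusers release in which `FluxPipeline` accepts `negative_prompt` and `true_cfg_scale` (real CFG alongside the distilled guidance), and the file and directory names below are placeholders for this repository's checkpoint and `clip_l_sumeshi_f1s`. Adjust for your environment (e.g. ComfyUI) as needed.

```python
# Minimal sketch, assuming a diffusers version where FluxPipeline supports
# negative_prompt / true_cfg_scale. File and directory names are placeholders.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import CLIPTextModel

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)

# Swap in the de-distilled transformer from this repository.
pipe.transformer = FluxTransformer2DModel.from_single_file(
    "sumeshi_flux1s_v004G.safetensors", torch_dtype=torch.bfloat16  # placeholder name
)
# The model must be used with clip_l_sumeshi_f1s; assumed here to be a local
# directory in Hugging Face format.
pipe.text_encoder = CLIPTextModel.from_pretrained(
    "clip_l_sumeshi_f1s", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    prompt=(
        "1girl, solo, long hair, looking at viewer, cherry blossoms, "
        "masterpiece, best quality, newest, safe"
    ),
    negative_prompt="low quality, worst quality, blurry",
    guidance_scale=3.0,      # distilled guidance: 3 or 5
    true_cfg_scale=7.0,      # real CFG: 6 ~ 9, 7 recommended
    num_inference_steps=20,  # 20 ~ 30
    height=1024,
    width=1024,
).images[0]
image.save("sumeshi_sample.png")
```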
## Training

### Dataset Preparation

I used [hakubooru](https://github.com/KohakuBlueleaf/HakuBooru)-based custom scripts.

- **Exclude Tags:** `traditional_media, photo_(medium), scan, animated, animated_gif, lowres, non-web_source, variant_set, tall image, duplicate, pixel-perfect_duplicate`
- **Minimum Post ID:** 1,000,000

### Key Addition

I added zero-filled tensors under the `guidance_in` keys to the Schnell model. These tensors are shaped to match the corresponding keys in Dev, as inferred from `flux/src/flux/model.py`. This was done because the trainer did not work properly when these keys were missing and the model name did not include "schnell". Since they are filled with zeros, my understanding is that guidance does not function, just as in the original Schnell model. Given my limited skills and the forceful nature of the addition, I am not sure this was the correct approach. A sketch of this key addition (and of the He initialization applied later in step 7) appears after the training-step list below.

### Training Details

The basic assumption is that the more the model is trained, the more the network is reconstructed, the further the distillation is undone, and the more usable CFG becomes.

- **Training Hardware:** a single RTX 4090
- **Method:** LoRA training, then merging the results
- **Training Script:** [sd-scripts](https://github.com/kohya-ss/sd-scripts)
- **Basic Settings:**

```
accelerate launch --num_cpu_threads_per_process 4 flux_train_network.py --network_module networks.lora_flux --sdpa --gradient_checkpointing --cache_latents --cache_latents_to_disk --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk --max_data_loader_n_workers 1 --save_model_as "safetensors" --mixed_precision "bf16" --fp8_base --save_precision "bf16" --full_bf16 --min_bucket_reso 320 --max_bucket_reso 1536 --seed 1 --max_train_epochs 1 --keep_tokens_separator "|||" --network_dim 32 --network_alpha 32 --unet_lr 1e-4 --text_encoder_lr 5e-5 --train_batch_size 3 --gradient_accumulation_steps 2 --optimizer_type adamw8bit --lr_scheduler="constant_with_warmup" --lr_warmup_steps 100 --vae_batch_size 8 --cache_info --guidance_scale 7 --timestep_sampling shift --model_prediction_type raw --discrete_flow_shift 3.2 --loss_type l2 --highvram
```

Continued training from v002E:

1. 21,000 images (res1024 bs1 acc3 warmup50 timestep_sampling sigmoid sigmoid_scale2) 15 epochs
2. 21,000 images (res1024 bs1 acc3 warmup50 sigmoid_scale2 discrete_flow_shift3.5) 15 epochs
3. Merged into model and CLIP_L
4. 3,893 images (res1024 bs2 acc1 warmup50 unet_lr5e-5 text_encoder_lr2.5e-5 sigmoid_scale2.5 discrete_flow_shift3 --network_args "loraplus_lr_ratio=8") 3 epochs
5. 3,893 images (res1024 bs2 acc1 warmup50 unet_lr5e-5 text_encoder_lr2.5e-5 sigmoid_scale2 discrete_flow_shift3 --network_args "loraplus_lr_ratio=8") 1 epoch
6. Merged into CLIP_L only
7. He-initialized the "guidance_in" layer
8. 3,893 images (full finetune res1024 bs2 acc1 adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" lr5e-6 warmup50 guidance_scale3.5 max_grad_norm 0.0 timestep_sampling discrete_flow_shift 3.1582) 1 epoch
9. 3,893 images (res1024 bs2 acc1 warmup50 guidance_scale1 timestep_sampling sigmoid sigmoid_scale 0.5 --network_args "in_dims=[8,8,8,8,8]") 4 epochs
10. 3,893 images (res512 bs2 acc1 warmup50 guidance_scale1 timestep_sampling sigmoid sigmoid_scale 0.3 --network_args "in_dims=[8,8,8,8,8]") 12 epochs
11. 543 images (repeats10 res512 bs4 acc1 warmup50 unet_lr3e-4 guidance_scale1 timestep_sampling sigmoid sigmoid_scale 0.3 --network_args "in_dims=[8,8,8,8,8]") 4 epochs
12. Merged into model and CLIP_L
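For illustration, here is a rough sketch (not the exact script I used) of the key addition described above: Dev-shaped, zero-filled `guidance_in` tensors are added to a Schnell checkpoint, with an option to He-initialize the weight matrices instead, as in step 7. The shapes follow the `MLPEmbedder` in the reference `flux/src/flux/model.py` (Linear 256 → 3072, SiLU, Linear 3072 → 3072); the file names are placeholders.

```python
# Rough sketch: add Dev-shaped "guidance_in" tensors to a Schnell checkpoint.
# Zero-filled by default (as in v002E); set HE_INIT = True for a minimal
# stand-in for the He initialization mentioned in step 7. File names are
# placeholders.
import torch
from safetensors.torch import load_file, save_file

HIDDEN = 3072  # FLUX.1 hidden size

sd = load_file("flux1-schnell.safetensors")

guidance_keys = {
    "guidance_in.in_layer.weight": (HIDDEN, 256),
    "guidance_in.in_layer.bias": (HIDDEN,),
    "guidance_in.out_layer.weight": (HIDDEN, HIDDEN),
    "guidance_in.out_layer.bias": (HIDDEN,),
}

HE_INIT = False
for key, shape in guidance_keys.items():
    t = torch.zeros(shape, dtype=torch.float32)
    if HE_INIT and t.dim() == 2:
        # He (Kaiming) normal initialization for the weight matrices;
        # biases stay zero.
        torch.nn.init.kaiming_normal_(t)
    sd[key] = t.to(torch.bfloat16)

save_file(sd, "flux1-schnell_guidance_in.safetensors")
```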
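As background for the `timestep_sampling`, `sigmoid_scale`, and `discrete_flow_shift` settings used above and in the v004G notes: the following is a paraphrase, based on my reading of sd-scripts' FLUX training code (not verbatim), of how these options shape the distribution of training timesteps t in [0, 1].

```python
# Paraphrase (assumption: my reading of sd-scripts' FLUX training code) of how
# --timestep_sampling sigmoid/shift, --sigmoid_scale, and --discrete_flow_shift
# determine the training timestep distribution.
import torch

def sample_timesteps(batch_size: int, mode: str = "sigmoid",
                     sigmoid_scale: float = 1.0,
                     discrete_flow_shift: float = 3.2) -> torch.Tensor:
    logits = torch.randn(batch_size) * sigmoid_scale
    t = torch.sigmoid(logits)  # centered on 0.5; narrower for small scales
    if mode == "shift":
        # For shift > 1, this pushes t toward 1 (the noisier timesteps).
        s = discrete_flow_shift
        t = (t * s) / (1 + (s - 1) * t)
    return t

# A smaller sigmoid_scale (e.g. 0.3 in steps 10-11) concentrates sampling on
# the middle timesteps, while "shift" with discrete_flow_shift > 1 skews it.
print(sample_timesteps(5, mode="sigmoid", sigmoid_scale=0.3))
print(sample_timesteps(5, mode="shift", discrete_flow_shift=3.2))
```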
## Resources (License)

- **FLUX.1-schnell (Apache-2.0)**
- **danbooru2023-webp-4Mpixel (MIT)**
- **danbooru2023-metadata-database (MIT)**

## Acknowledgements

- **black-forest-labs:** Thanks for publishing a great open-source model.
- **kohya-ss:** Thanks for publishing the essential training scripts and for the quick updates.
- **Kohaku-Blueleaf:** Thanks for the extensive publication of the dataset scripts and the various training conditions.