Quantization made by Richard Erkhov.
MagicPrompt-tinystories-33M-epoch10-merged - bnb 4bits
- Model creator: https://huggingface.co/Technotech/
- Original model: https://huggingface.co/Technotech/MagicPrompt-tinystories-33M-epoch10-merged/
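A 4-bit bitsandbytes checkpoint like this one can usually be loaded directly with transformers, as in the minimal sketch below. The repo id shown is a placeholder, not the actual path of this quantized upload; substitute the correct one.

```python
# Minimal sketch of loading a bitsandbytes 4-bit checkpoint with transformers.
# Requires `bitsandbytes` to be installed and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/MagicPrompt-tinystories-33M-epoch10-merged-4bits"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The weights are already serialized in 4-bit, so no extra quantization
# arguments are needed; device_map="auto" places the layers on the GPU.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```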
Original model description:
library_name: transformers
license: apache-2.0
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
language:
  - en
tags:
  - completion
widget:
  - text: A picture of
  - text: photo of
  - text: a drawing of
inference:
  parameters:
    max_new_tokens: 20
    do_sample: True
    early_stopping: True
    temperature: 1.2
    num_beams: 5
    no_repeat_ngram_size: 2
    repetition_penalty: 1.35
    top_k: 50
    top_p: 0.75
MagicPrompt TinyStories-33M (Merged)
Info
Magic prompt completion model trained on a dataset of 80k Stable Diffusion prompts. Base model: TinyStories-33M. Inspired by MagicPrompt-Stable-Diffusion.
The model is reasonably capable for 33M parameters thanks to the TinyStories base, but it clearly lacks a deep understanding of most subjects. Still, considering the size, I think it's decent. Whether you would use this over a small GPT-2 based model is up to you.
Examples
Best generation settings I found: max_new_tokens=40, do_sample=True, temperature=1.2, num_beams=10, no_repeat_ngram_size=2, early_stopping=True, repetition_penalty=1.35, top_k=50, top_p=0.55, eos_token_id=tokenizer.eos_token_id, pad_token_id=0
(there may be better settings).
no_repeat_ngram_size is important for making sure the model doesn't repeat phrases (as it is quite small).
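The sketch below shows these settings passed to model.generate with transformers; it assumes the merged model loads directly via AutoModelForCausalLM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Technotech/MagicPrompt-tinystories-33M-epoch10-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("A picture of", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=1.2,
    num_beams=10,
    no_repeat_ngram_size=2,   # keeps the small model from looping on phrases
    early_stopping=True,
    repetition_penalty=1.35,
    top_k=50,
    top_p=0.55,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```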
(In each example below, the leading phrase is the prompt; the remainder was generated by the model.)
"found footage of a ufo in the forest, by lusax, wlop, greg rutkowski, stanley artgerm, highly detailed, intricate, digital painting, artstation, concept art, smooth"
"A close shot of a bird in a jungle, with two legs, with long hair on a tall, long brown body, long white skin, sharp teeth, high bones, digital painting, artstation, concept art, illustration by wlop,"
"Camera shot of a strange young girl wearing a cloak, wearing a mask in clothes, with long curly hair, long hair, black eyes, dark skin, white teeth, long brown eyes eyes, big eyes, sharp"
"An illustration of a house, stormy weather, sun, moonlight, night, concept art, 4 k, wlop, by wlop, by jose stanley, ilya kuvshinov, sprig"
"A field of flowers, camera shot, 70mm lens, fantasy, intricate, highly detailed, artstation, concept art, sharp focus, illustration, illustration, artgerm jake daggaws, artgerm and jaggodieie brad"
Next steps
- Larger dataset, e.g. neuralworm/stable-diffusion-discord-prompts or daspartho/stable-diffusion-prompts
- More epochs
- Instead of going smaller than GPT-2 (137M), fine-tune a 1-7B parameter model
Training config
- Rank 16 LoRA (sketched below)
- Trained on Gustavosta/Stable-Diffusion-Prompts for 10 epochs
- Batch size of 64
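A minimal sketch of what a rank-16 LoRA setup over the TinyStories-33M base might look like with peft. The base repo id, alpha, and dropout values are assumptions, not taken from the original card.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model repo id is an assumption (the card only says "TinyStories-33M").
base = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")

lora_config = LoraConfig(
    r=16,                 # rank 16, as stated above
    lora_alpha=32,        # assumption: not specified in the card
    lora_dropout=0.05,    # assumption: not specified in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```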
Training procedure
The following bitsandbytes quantization config was used during training (reproduced in code after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
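Expressed as a transformers BitsAndBytesConfig, the settings above correspond roughly to the sketch below; the model id points at the original merged repo and is only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

model = AutoModelForCausalLM.from_pretrained(
    "Technotech/MagicPrompt-tinystories-33M-epoch10-merged",
    quantization_config=bnb_config,
    device_map="auto",
)
```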
Framework versions
- PEFT 0.5.0.dev0