
The fine-tuned model is available on Hugging Face: HUGGINGFACE FINETUNED MODEL


---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---

3d illustration style: This is a fine-tuned Stable Diffusion model trained on the images listed in my attribution.txt file.

Use the token `3d illustration style` in your prompts to trigger the effect.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the fine-tuned checkpoint in half precision and move it to the GPU.
model_id = "aidystark/3Dillustration-stable-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Surround the subject with the style trigger token.
prompt = "3d illustration style rendering of anthony joshua 3d illustration style"
image = pipe(prompt).images[0]
image.save("./img.png")
```
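The prompt above places the trigger token both before and after the subject. A minimal sketch of that convention, using a hypothetical `styled_prompt` helper (not part of this repo) to build prompts for arbitrary subjects:

```python
# Hypothetical helper: wraps a subject with the "3d illustration style"
# trigger token, mirroring the example prompt in the snippet above.
STYLE_TOKEN = "3d illustration style"

def styled_prompt(subject: str) -> str:
    # Token appears before and after the subject, as in the example prompt.
    return f"{STYLE_TOKEN} rendering of {subject} {STYLE_TOKEN}"

print(styled_prompt("anthony joshua"))
# 3d illustration style rendering of anthony joshua 3d illustration style
```

The resulting string can be passed directly as the `prompt` argument to the pipeline call shown above.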

Characters rendered with the model: ![img](https://github.com/aidyai/stable-diffusion-illustration3d/blob/main/data/3d.png?raw=true) ![img](https://github.com/aidyai/stable-diffusion-illustration3d/blob/main/data/ycyxg.png?raw=true)

License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
  2. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
  3. You may redistribute the weights and use the model commercially and/or as a service. If you do, be aware that you must include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here