Tagging created entities
One of the great problems this type of AI software has yet to master is the ability to create new characters and then use those characters as the basis for new art. Suppose I prompt for: an athletic young wizard.
The software generates a few for me to choose from. However, if I like the look of one such generated character, let's call him Joe, there is no way to tag that image and tell the software: now show me Joe on the deck of a boat, now show me Joe fighting a monster, now show me Joe seated at a table conducting negotiations.
This is the kind of feature and functionality that would allow non-artists to do something more substantive, such as helping to create a graphic novel.
You can feed existing images into Stable Diffusion as part of a prompt, so I guess it would be possible to generate a character and then feed that image into new prompts, iteratively. But I don't think this is currently simple, so you are right, it would be a big advance. I think we are still in the "discovering what it can do" phase, and the more practical, commercial applications will follow on from that.
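Roughly the idea, sketched with the Hugging Face diffusers library rather than the raw scripts. The model id, file names, and the strength/guidance numbers are just placeholders, and the exact argument names have shifted a little between diffusers versions (older releases called the image argument init_image):

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load the img2img pipeline; half precision keeps VRAM usage down on a CUDA GPU.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from a previously generated character image, e.g. "Joe".
init_image = Image.open("inputs/joe.png").convert("RGB").resize((512, 512))

# "strength" controls how far the output may drift from the input:
# lower values stay closer to Joe, higher values follow the new text prompt more.
result = pipe(
    prompt="the same young wizard standing on the deck of a boat",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("outputs/joe_on_boat.png")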
Yes, you are right, we are a long way from figuring out what we can do with these tools. I don't think that a single image would function well as a seed/training set, but I would like to try. Do you have a link to an article that explains how to use a seed image to generate new images?
I've only done it via the command-line version, which I downloaded. So you use something like this:
python scripts/img2img.py --n_samples 10 --init-img inputs/yourimage.jpg --prompt "optional extra text prompt"
Where you create a directory called "inputs" under where you unpacked SD. But getting it running on a PC can be tough if you're not technical, and it requires a powerful Nvidia graphics card with lots of VRAM. Even then I had to scale down the images I used as input, otherwise it ran out of memory.
I followed this article: https://www.howtogeek.com/830179/how-to-run-stable-diffusion-on-your-pc-to-generate-ai-images/
But then I had to use optimised versions of the scripts from https://github.com/basujindal/stable-diffusion/tree/main/optimizedSD to conserve RAM. Good luck!
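As an aside, if you go via the Hugging Face diffusers library instead of the raw scripts, it has some built-in memory savers that serve a similar purpose to the optimised scripts. This is only a sketch; the model id and image size are placeholders, and which methods are available depends on the library version:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,   # half precision roughly halves VRAM use
).to("cuda")

# Compute attention in slices: a little slower, but lower peak VRAM.
pipe.enable_attention_slicing()

image = pipe(
    "an athletic young wizard, portrait",
    height=512, width=512,       # keep dimensions modest to avoid running out of memory
).images[0]
image.save("wizard.png")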
I have the necessary expertise, given my master's in computer science; however, I don't have the necessary computer hardware. I have also been thinking about how to edit an already created image. Suppose I asked for a barren desert landscape with rocky mountains shown on a very distant horizon. Then, after I got the image I wanted, I decided I wanted the mountains to have more of a reddish appearance, and I wanted the sky to be a deeper azure blue with only the faintest whisper of white clouds. Then after that I wanted a circular stone tablet placed in the foreground, partially covered by the sands of the desert, and some footprints leading away through the sand towards the mountains... I could go on, but you get the idea. There is so much obvious room for innovation and improvement in this technology. Very exciting, if such extensions to the software can be created.
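For what it's worth, regional edits like that are already partly approachable with inpainting, where you paint a mask over the area you want changed and regenerate only that region from a new prompt. A rough sketch with the diffusers inpainting pipeline; the model id and file names are placeholders:

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The original desert scene, plus a hand-drawn mask that is white over the
# mountains (the region to regenerate) and black everywhere else.
desert = Image.open("outputs/desert.png").convert("RGB").resize((512, 512))
mask = Image.open("inputs/mountain_mask.png").convert("L").resize((512, 512))

edited = pipe(
    prompt="distant rocky mountains with a reddish tint, deep azure sky, faint wisps of white cloud",
    image=desert,
    mask_image=mask,
).images[0]
edited.save("outputs/desert_red_mountains.png")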
Another thought comes to me. Suppose you want to create a "painting style". All human artists have a unique style. It would be nice to be able to give each drawing in a series a unique style and character.
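One technique already being explored for exactly this, both for recurring characters like Joe and for a per-series style, is textual inversion: a small embedding trained on a handful of images and then referenced by a made-up token in prompts. A sketch, assuming a diffusers version that supports loading such embeddings; the embedding file and the token name are placeholders you would train or choose yourself:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a small, separately trained style embedding and bind it to a new token.
pipe.load_textual_inversion("embeddings/my_series_style.bin", token="<series-style>")

# Any prompt can now reference the learned style via that token.
image = pipe(
    "a young wizard seated at a table conducting negotiations, in the style of <series-style>"
).images[0]
image.save("wizard_negotiation.png")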