Errors
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 8.00 GiB total capacity; 7.21 GiB already allocated; 0 bytes free; 7.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What can I do?
UPDATE SOLUTION TO MEMORY PROBLEM ON MY LAST POST BELOW
Same, getting a CUDA error. I have 4 GB VRAM and I can run other models without problems, but with this one I get the error. How do I fix it?
UPDATE AGAIN:
Removing "--no-half" from COMMANDLINE_ARGS= in webui-user.bat file seems to fix the memory problem. Close and relaunch webui-user.bat after you edit the file.
How do I write that exactly?
Find the file webui-user.bat in your stable diffusion root folder, edit it with Notepad, and remove the part that says --no-half.
Save, close, and relaunch it. It's the same file you click to launch automatic1111.
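For everyone asking how to write it exactly: after removing the flag, the file looks like the stock webui-user.bat below. This is just an illustration; if you had other arguments besides --no-half, keep those on the COMMANDLINE_ARGS line.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat
```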
How do you add this "--no-half" to the bat file? How do you write it exactly?
So you put -- before no-half?
I did not have --no-half in my bat file in the first place and still got the memory error.
I'm getting the memory issue both with and without --no-half in the file.
Still getting this even without --no-half: RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 6.00 GiB total capacity; 5.26 GiB already allocated; 0 bytes free; 5.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
works perfectly for me
If you don't have --no-half and it still gives the memory error, then try this and let me know if it works. In your webui-user.bat file, change the lines so they look like this (note: no quotes around the set line, or the variable won't be set correctly):
set COMMANDLINE_ARGS=--xformers --deepdanbooru --lowvram --opt-split-attention
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:3072
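By the way, the error's own hint ("If reserved memory is >> allocated memory try setting max_split_size_mb") can be checked by reading the numbers out of the message. A quick sketch in plain Python; parse_oom is a hypothetical helper, not part of webui:

```python
import re

# Bytes per unit as they appear in PyTorch OOM messages.
UNITS = {"bytes": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}

def parse_oom(msg):
    """Return (allocated_bytes, reserved_bytes) parsed from a CUDA OOM message."""
    alloc = re.search(r"([\d.]+)\s*(GiB|MiB|KiB|bytes) already allocated", msg)
    reserv = re.search(r"([\d.]+)\s*(GiB|MiB|KiB|bytes) reserved", msg)
    to_bytes = lambda m: float(m.group(1)) * UNITS[m.group(2)]
    return to_bytes(alloc), to_bytes(reserv)

msg = ("CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 6.00 GiB "
       "total capacity; 5.26 GiB already allocated; 0 bytes free; "
       "5.30 GiB reserved in total by PyTorch)")
allocated, reserved = parse_oom(msg)
print(f"gap between reserved and allocated: {(reserved - allocated) / 1024**2:.0f} MiB")
```

For the 6 GB error above, the gap is only about 40 MiB, so reserved is not much bigger than allocated: the card is simply full, and fragmentation tuning via max_split_size_mb is less likely to help than --lowvram or --medvram.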
hi guys.
very stoked to work with this.
I get this error though:
Loading weights [ffd280ddcf] from /Users/chief/stable-diffusion-webui/models/Stable-diffusion/instruct-pix2pix-00-22000.ckpt
Applying cross attention optimization (InvokeAI).
Weights loaded in 3.3s (load weights from disk: 1.8s, apply weights to model: 0.8s, move model to device: 0.6s).
Processing 1 image(s)
Traceback (most recent call last):
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1015, in process_api
result = await self.call_function(
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 833, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/Users/chief/stable-diffusion-webui/extensions/stable-diffusion-webui-instruct-pix2pix/scripts/instruct-pix2pix.py", line 128, in generate
model.eval().cuda()
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 128, in cuda
device = torch.device("cuda", torch.cuda.current_device())
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 482, in current_device
_lazy_init()
File "/Users/chief/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(btw I'm on Automatic1111 on a M1 Macbook)
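For what it's worth, the traceback points at the extension calling model.eval().cuda() unconditionally, which assumes an NVIDIA GPU and fails on Apple Silicon. A device-agnostic sketch of what that placement could look like; pick_device is a hypothetical helper, not the extension's actual code:

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available device instead of assuming CUDA."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple Silicon (M1/M2) Metal backend, present in torch >= 1.12
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# model = model.eval().to(pick_device())  # never asserts that CUDA exists
print(pick_device())
```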
Looks like you may have an outdated torch version. Did you update automatic1111 to the latest version? If so, you also need to update torch to the latest version manually.
I get the same error (M1 Max). When I try to reinstall torch via --reinstall-torch, I only get the message that the requirements are already met. The WebUI script, however, says that 1.13.1 should be installed instead of 1.12.1, but there seems to be no way to do so. Does anyone have an idea? :-)
After this the model loads, but the CUDA error still comes up when generating.
How do you update torch manually?
I get "requirements already met".
delete the venv folder or the folder in python
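If you'd rather not delete venv, a manual upgrade inside the webui's own venv might look like this. This is a sketch, not a guaranteed fix; the version numbers are the ones mentioned above, and the paths assume the default webui layout:

```shell
# run from the stable-diffusion-webui folder
source venv/bin/activate              # on Windows: venv\Scripts\activate.bat
pip install torch==1.13.1 torchvision==0.14.1
python -c "import torch; print(torch.__version__)"   # should now say 1.13.1
```

The key point is activating the webui's venv first; installing torch with your system pip leaves the webui's copy untouched, which would explain it still reporting 1.12 after a manual install.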
Is it possible to run it at all on M1 Max?
Can I run dreambooth on M1?
What is the best way to make use of the Neural Engine for stable diffusion, dreambooth, and instruct pix-2-pix?
When I delete venv and run it, it goes:
Installing collected packages: charset-normalizer, urllib3, typing-extensions, pillow, numpy, idna, certifi, torch, requests, torchvision
Successfully installed certifi-2022.12.7 charset-normalizer-3.0.1 idna-3.4 numpy-1.24.1 pillow-9.4.0 requests-2.28.2 torch-1.12.1 torchvision-0.13.1 typing-extensions-4.4.0 urllib3-1.26.14
so it reinstalls torch 1.12.1.
Also, I manually installed torch 1.13 with pip, but it keeps saying 1.12 at the bottom of the webui when I launch:
Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[ ] xformers version N/A installed.
[+] torch version 1.12.1 installed.
[+] torchvision version 0.13.1 installed.
######################################################################################################
Launching Web UI with arguments: --upcast-sampling --use-cpu interrogate
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
not sure what to do
Hi guys, does anybody know how to fix this error? Very eager to use this.
For me removing "--no-half" in webui-user.bat is working as well. (NVIDIA 3080 12GB)
Hello, I have similar issues using this pre-trained model. It was running fine on my computer before, but at some point (about two weeks ago) it started raising the error that my computer's VRAM is not enough. For reference, my computer has a 3090 with 24GB VRAM.