Comments
My video card is missing as a CUDA device (GPU: 3070); only the CPU shows up. SD GUI 1.8.0.
Then I get an error on startup: (Detected no CUDA-capable GPUs.)
What version of Stable Diffusion does the GRisk Stable Diffusion GUI 0.1 run?
Does anyone know what the default sampling method in GRisk is?
I was generating images today with this AI, but when I went to see the results they were just black pictures. When I came back here and read the "Important Stuff" section, the first line is about GTX 1660 cards not being able to render, and unfortunately my card is a GTX 1660 Super, so I guess I was unlucky.
When will you fix this?
It's been fixed for a long time on the Patreon; the GUI on Itch is just way, WAY out of date.
What is their Patreon?
https://www.patreon.com/DAINAPP
Can someone explain how to actually paint/mark areas with inpaint? I choose an input and I choose the inpaint model; what am I missing after that? All it does is render a new image.
When is AMD support coming?
Whenever I try to load a .ckpt model, I get this error message:
Traceback (most recent call last):
File "start.py", line 1388, in OnRender
File "start.py", line 1270, in LoadModel
File "convert_original_stable_diffusion_to_diffusers.py", line 630, in Convert
File "omegaconf\omegaconf.py", line 187, in load
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\Downloads\\Stable Diffusion GRisk GUI\\v1-inference.yaml'
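The missing file is the Stable Diffusion v1 config that the bundled converter (convert_original_stable_diffusion_to_diffusers.py, per the traceback) apparently expects next to the GUI. A minimal sketch of a fix, assuming it really just wants v1-inference.yaml at the path shown in the error; the URL below is my guess at the right file in the CompVis stable-diffusion repo:

import urllib.request

# Assumed location of the SD v1 inference config in the CompVis repo.
CONFIG_URL = ("https://raw.githubusercontent.com/CompVis/stable-diffusion/"
              "main/configs/stable-diffusion/v1-inference.yaml")
# Destination path taken verbatim from the error message above.
DEST = r"D:\Downloads\Stable Diffusion GRisk GUI\v1-inference.yaml"

urllib.request.urlretrieve(CONFIG_URL, DEST)
print("saved", DEST)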
I keep getting out of memory errors even when it's only trying to allocate like 9 MB. Makes no sense, and I can't find any solutions online.
I ran into an issue where I can run a prompt at 100, 200, 400 and 500 iterations, but 300 iterations gives an error:
Rendering: anime screenshot wide-shot landscape with house in the apple garden, beautiful ambiance, golden hour
0it [00:00, ?it/s]
Traceback (most recent call last):
File "start.py", line 363, in OnRender
File "torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 152, in __call__
File "diffusers\schedulers\scheduling_pndm.py", line 136, in step
File "diffusers\schedulers\scheduling_pndm.py", line 212, in step_plms
File "diffusers\schedulers\scheduling_pndm.py", line 230, in _get_prev_sample
IndexError: index 1000 is out of bounds for dimension 0 with size 1000
V Scale is 7.50, resolution is 768x512, seed: 12345, on a 3080 12GB.
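The pattern of which step counts fail lines up with simple integer arithmetic in the PNDM scheduler named in the traceback. A rough sketch, assuming the scheduler spaces timesteps by 1000 // steps and then looks up alphas_cumprod at timestep + 1 (which is what "index 1000 ... size 1000" suggests):

TRAIN_STEPS = 1000  # size of alphas_cumprod, per the error

for n in (100, 200, 300, 400, 500):
    ratio = TRAIN_STEPS // n                       # assumed timestep spacing
    last_t = ((TRAIN_STEPS - 1) // ratio) * ratio  # largest timestep generated
    lookup = last_t + 1                            # assumed alphas_cumprod index
    status = "IndexError" if lookup >= TRAIN_STEPS else "ok"
    print(f"{n} steps: last timestep {last_t}, lookup {lookup} -> {status}")

Only 300 (spacing 3) lands exactly on timestep 999, so the +1 lookup hits index 1000 and overflows; under the same assumption, a count like 250 should dodge the boundary.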
Why are steps limited to 500, even on the Patreon 0.52 version? (Also, not sure if it's connected in any way, but Disco Diffusion doesn't have a limit.)
Processes three random images every time alongside my query. Pretty sus.
Are you sure you don't have any line break in your text followed by nothing? Because that creates random pics.
Every time this has ever happened to me, it meant I had blank lines in the prompt, either before or after what I actually typed.
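For anyone wondering how a blank line turns into extra images: if the prompt box is split on newlines, every empty line becomes an empty prompt of its own. A toy illustration of that splitting behaviour, not the GUI's actual code:

prompt_box = "a castle on a hill\n\n"  # note the trailing blank lines
prompts = prompt_box.split("\n")
print(prompts)  # ['a castle on a hill', '', ''] -> two empty prompts rendered as "random" images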
I paid 10 dollars and I am using version 0.52. I still get a black screen, and although I bought it, no money has been deducted from my card.
Patreon only charges you at the end of the month.
If you are having this problem, it's a hardware bug. You can use the float32 option as a workaround, but it will require more VRAM.
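For reference, this is not the GUI's code, but the float16/float32 choice looks roughly like this in plain diffusers, which also shows why the workaround costs VRAM: float32 weights take about twice the space of float16. The model id is just the usual example checkpoint.

import torch
from diffusers import StableDiffusionPipeline

# Default half precision (known to render black images on many GTX 16xx cards):
# pipe = StableDiffusionPipeline.from_pretrained(
#     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

# float32 workaround: roughly 4 bytes per weight instead of 2, so about double the VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda")
image = pipe("a lighthouse at golden hour").images[0]
image.save("out.png")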
Thanks for the answer. Yes, I enabled the float32 option and got this error: RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 4.00 GiB total capacity; 2.99 GiB already allocated; 0 bytes free; 3.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
My graphics card is a GTX 1650 Super.
Any news since a month ago?
Did float16 (rather than 32) help? What about max_split_size_mb:128?
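The max_split_size_mb hint from the error message is controlled by an environment variable that has to be set before anything initializes CUDA. A minimal sketch for a script; for the packaged GUI the equivalent is running "setx PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128" in a terminal and restarting:

import os

# Must be set before torch touches CUDA for the allocator to pick it up.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported afterwards so the setting is in place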
Really want to know why this program doesn't run out of memory (talking about system RAM, not VRAM) the way every other SD repo/GUI does. I have only 8 GB of system RAM. (Thank you for making this!)
Not sure why other GUIs use that much RAM, but you're welcome.
Don't know where to ask this. Can I interface with this in any way through code? I want to try to make a private Discord bot for me and my friends. :)
Will it be possible to use the DALL-E 2 base in future versions? Thanks.
DALL-E 2 doesn't have its source code or its model released as open source, so it's impossible for now.
It's a pity :(
I'm having this error when loading a DreamBooth model:
Loading model (May take a little time.)
{'feature_extractor'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
File "start.py", line 320, in OnRender
File "start.py", line 284, in LoadModel
File "diffusers\pipeline_utils.py", line 247, in from_pretrained
TypeError: __init__() missing 1 required positional argument: 'feature_extractor'
DreamBooth models are still not supported. I still need some time to make them work.
No problem, with the ckpt converter I don't need this program anymore.
Hello, I got this error when running a 512x512 image. I'm using an NVIDIA 3060 Ti and 16 GB of RAM; any solutions?
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 2.56 GiB already allocated; 2.69 GiB free; 2.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I don't understand: 2.69 GiB free isn't enough to allocate 512 MiB? xD
Thanks in advance
It should totally not use that much memory for a 512x512 image. Is there anything else using the VRAM of the computer?
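A quick way to answer that is the driver's nvidia-smi tool, which lists every process currently holding VRAM; the sketch below just shells out to it and assumes it is on PATH.

import subprocess

# Prints GPU utilization plus the per-process VRAM table.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)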
I got the Patreon version and it works great! Having tons of fun. Thanks for putting it together!
On my 1660 Ti it just outputs black, and it's not overclocked.
The free version has a bug that breaks the AI when using 16xx Nvidia cards; it's fixed on the Patreon, annoyingly.
Definitely sets off my pet peeve about devs withholding important bugfixes behind a paywall, though GRisk plans to eventually update the free one, whenever that is.
This bug is more of a hardware problem than a software one. The fix is pretty much a workaround for the hardware bug.
Please look into adding DreamBooth!
Yeah, it will be incorporated into the GUI eventually.
Is there a GitHub for this?