Stable Diffusion is really powerful.
I tried it on Windows; the output is all black images, and I get this error while the GUI is loading:

```
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001F6781C65E0>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001F6781C6820>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
```

So I believe maybe I have the wrong Python installed, or something like that? I just unzipped the files and ran the .exe as administrator, so in theory it should work.
Exactly the same problem.
Could be, but I'm pretty sure it's not. There is a setting to render a second image; that could be it. Furthermore, I am not having this issue, and I can't find any other evidence.
Pretty sure you have a line break after your prompt, and it is generating an "empty" image because of the empty line... happens often when you copy your prompt from somewhere, or press Enter at the end.
I've had that happen a few times, it happens when you accidentally leave an empty line after the end of your prompts, like if you hit ENTER one too many times, or left a line after deleting a bunch of prompts.
4.5 stars.
saving the output folder would be very nice :D
Can't wait to try img2img here
I tried it on the patreon version and it's amazing! Hope it gets to itch for you guys as well soon :)
I've been using the NMKD GUI in the mean time
I have a Tesla M60, which works like an SLI card. Could you add functions like
"Use all GPUs", or checkboxes for GPU 0, GPU 1, etc.?
As you can see, GPU 1 is not using its resources.
With that, maybe I could use the full 16 GB of VRAM.
Hey, is there any way to remove the NSFW filter? (asking for a friend)
nsfw filter?
They want to generate... "content" with this. NSFW content.
I'm fairly certain its disabled by default, I didn't have to change or disable anything in the program.
"asking for a friend"
There is no NSFW filter here, though I'd assume the Stable Diffusion model (out of GRisk's control) probably isn't trained on anything more lewd than nudity.
I'd like a direct download link and a version packaged as a .zip, please! Also, CLI mode is broken... but it's an awesome tool!
Could you tell me the parameters for this beautiful woman? Prompt text, seed, etc.?
beautiful girl longshot, red hair, flower crown, hyper realistic, pale skin, 4k, extreme detail, detailed drawing, trending artstation, hd, fantasy, d&d, realistic lighting, by alphonse mucha, greg rutkowski, sharp focus, elegant
'steps': 60,
'vscale': 8.0,
'seed': 3314742242,
'resX': 512, 'resY': 704
thanks mate, you are a real hero!
Kind of funny stuff, version 0.3 made her smile! with the same prompt and seed, thanks for sharing btw!
thank you so much im so new to this changed her hair made it curly and blonde
erry girl longshot, blonde curly hair, flowers crown, hyper realistic, pale skin, 8k, extreme detail, detailed drawing, trending artstation, hd, fantasy, d&d, realistic studio lighting, by alphonse mucha, greg rutkowski, sharp focus, elegant with graffiti background model shoot, apocalypse
Took like 10 mins, do not recommend. Seed 3314742242, steps 50.
Getting the following error: RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 2.00 GiB total capacity; 1.59 GiB already allocated; 17.26 MiB free; 1.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
My card is a GeForce GT 730; can I do anything with it?
Hi, the only way to do this would be to generate the image at a smaller size. The VRAM recommendation is 4 GB minimum, and 6 GB to generate images at the standard size of 512x512.
This error has nothing to do with the GPU; people with 12 GB and 8 GB of VRAM are also facing the same issue, and reducing the size did not help.
Did you manage to fix it? If so, please share your solution.
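The error message's own suggestion can be tried as a first step. A minimal sketch (assuming you can edit or wrap the tool's startup script): configure PyTorch's CUDA caching allocator before `torch` is imported. Note this only reduces fragmentation; it cannot create VRAM a 2 GB card doesn't have.

```python
import os

# Must be set before `import torch` runs, since the allocator reads this
# variable when CUDA is initialized. The value 64 is an example, not a
# recommendation from the GUI's author.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

# import torch  # must come after the line above
```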
RTX 2060 Super, 704x512
Steps 60
V Scale 8.00
Hope you can make it run on Mac. Awesome work, thanks!
I also would really appreciate a Mac version. I tried running it in Parallels and it did not work.
Hope they make one for Mac.
This program requires Nvidia cards so it will never run on a Mac and AMD doesn't care about artificial intelligence (unfortunately for them) so cry yourself a river.
Of course you can't! You are trying to fool the program BUT it REQUIRES Nvidia hardware! How technologically ignorant can you be?
man i have a 1660. man pls just pls get the half precision thing done next update. this is not a demand. however it's a beg.
How can i fix this. :(
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 7 for tensor number 1 in the list.
Resolutions must be multiples of 64. 512x512 is the standard size; if you change either dimension, add or subtract 64. For example, 576x640 will work, as will 448x384. As long as both numbers are multiples of 64 and you have enough VRAM, you should be fine.
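The rule above can be wrapped in a tiny helper that snaps a requested resolution to the nearest multiple of 64 (a sketch; the name `snap_to_64` is mine, not part of the tool):

```python
def snap_to_64(x):
    """Round a requested dimension to the nearest multiple of 64, minimum 64."""
    return max(64, round(x / 64) * 64)

print(snap_to_64(500), snap_to_64(700))  # -> 512 704
```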
are the rest of the features like img2img and upscaling going to be paywall locked to patreon?
Help! I always get black images, at any resolution... no errors, and everything looks fine, but it always generates totally black PNGs!!
Great work so far! Will you be adding img2img soon?
It is present already in the patreon version of the GUI. Hopefully it is added to the public release soon.
A shame this requires a Nvidia GPU, it looks interesting and a couple of my friends can play with it, but my computer uses an AMD. I saw in the comments that there was a way to use it without that GPU but it requires knowledge of python to even install, and I've never used it and don't know where/how to start. I did install the latest version of the software but I don't know how to do anything in it.
Any advice/tips would be appreciated.
Well, regarding making it work on AMD, it's rather easy: 1) tell AMD to write a GPU API like CUDA; 2) tell AMD to write a tensor/neural-network library like cuDNN for the API they created in point 1; 3) ask someone to rewrite all the Python libraries for the API created in point 2; and finally 4) prepare a package using this API.
AMD isn't the focus for these kinds of programs, and probably won't be for the foreseeable future. It's sucky, but you might have to commission a Python coder, or just wait until someone releases a version for AMD if you're not familiar with Python. GL
Understandable. And I have friends who can play with the app as is, it's just bad luck with my PC I guess.
The "Samples per Prompt" input doesn't seem to do anything. It would be great if I could put 10 in there and have it render 10 versions of the same image with variations in the seed. I prefer them not being in a grid, though some people seem to prefer that.
Not implemented on the Itch version yet.
I would like the ability to save EACH iteration. So if I have 50 steps, it will generate 50 images showing me what it looked like at each step. I've found that sometimes I have a good start to an image at 50, so I bump it up to 200, but then I find that 100 actually turned out better than 200, because 200 wiped out some features that I liked, etc. Just a thought :)
Great idea. This should be a checkbox, save the iterations in a subfolder.
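For anyone curious what the feature would look like, here is a hypothetical sketch of saving every intermediate step of a denoising loop into a subfolder. `denoise_step` and `to_png` are stand-in placeholders for the tool's internals (assumptions), defined here only so the sketch runs:

```python
import os

def denoise_step(latent, i):
    """Placeholder for one diffusion step (the real tool's internals differ)."""
    return [x * 0.9 for x in latent]

def to_png(latent, path):
    """Placeholder for decode-and-save; writes the latent values as text."""
    with open(path, "w") as f:
        f.write(",".join(f"{x:.4f}" for x in latent))

def generate_with_snapshots(latent, steps, outdir="steps"):
    """Run `steps` iterations, saving a snapshot after each; return the paths."""
    os.makedirs(outdir, exist_ok=True)
    saved = []
    for i in range(steps):
        latent = denoise_step(latent, i)
        path = os.path.join(outdir, f"step_{i:03d}.png")
        to_png(latent, path)
        saved.append(path)
    return saved
```

A checkbox in the GUI could simply toggle whether the save-inside-the-loop branch runs.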
One thing I've noticed from doing renderings is that there should be billions of possible seeds, and yet it's not rare for the GUI to reuse a seed when set to -1. I noticed this with identical or very similar requests; if it really took a random seed, these coincidences should not be possible...
If it does this, then there is a chance that several people with the same ideas will end up with quite similar images, because the chosen seed isn't random enough.
Yes, it seems there is some deterministic random algorithm. I've noticed this too, especially when I set it back to -1 (random): it starts choosing random seeds it has already used before.
The random number generator is deterministic; it needs to be seeded with something like a hash of the current millisecond, or some other derivative of a changing value.
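The fix the commenter describes is a few lines. A minimal sketch, assuming the tool just needs any non-repeating integer seed: derive it from the current time so consecutive runs don't collide.

```python
import time

def fresh_seed(bits=32):
    """Derive a seed from the current time in nanoseconds, truncated to `bits`."""
    return time.time_ns() % (1 << bits)
```

Each call reflects the wall clock at call time, so back-to-back runs get different seeds (barring sub-nanosecond calls, which real generation times rule out).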
@grisk do you plan to add img2img ?
It's already added on either 0.2 or 0.3, as far as I can tell. GRisk just hasn't put it here on Itch.
Where can I find it then ?
https://www.patreon.com/DAINAPP
Yes this is my #1 request please.
1. Can we get a link to your Patreon somewhere on your Itch profile?
2. How long of a delay is there between the paid and free releases? I see 0.3 is released on the Patreon, but I don't think the Itch release has been updated at all yet.
3. Is there a free changelog for the paid versions posted publicly somewhere? Kinda stinky the public version here has some major bugs and I'd like to know if the paid versions have fixed them yet.
4. Is there any way to know what specific model it's using, and what version? The Dezgo site is talking about using a v1.4 now, but this app just shows "default" in the model dropdown box.
EDIT: just as I posted this, the app softlocked while trying to generate a prompt. No errors or anything, either.
GTX 1650 Super produces a black image with a windows 10 "gaming" driver installed. Does anyone have the same trouble?
Same.
Is there any way to uncap the number of steps? I know there's severe diminishing returns but I still wanna try making images with more than 500 steps.
Why tho ? I've tried the same seed with steps from 25 to 500 and the difference is minimal, not worth the wait.
Why not? Even something minimal is something and chasing more steps to refine an image as much as possible is something I'd like to do. Mostly for images I like after doing the quick 10 second 50 step renders. I'd even try 10,000 steps since it'd only take like 30 minutes with my GPU.
I am trying to install this. It is showing this. How to fix this problem ?
Troubleshooting so far:
- If you have less than 8 GB of VRAM, use 512x316 or a smaller Y.
- If you only get blobs, you need at least 512xXXX.
- stable diffusion FileNotFoundError WinError 3: relocate your save folder.
- If you want a loop of image creation, just spam the console with your input, since each new line is a new image.
Upon rendering start, it throws the following error and quits:
"Could not load library cudnn_ops_inter64_8.dll. Error code 126. Please make sure cudnn_ops_inter64_8.dll is in your library path!"
However I checked and this file is located in "./torch/lib".
Any ideas?
Edit: Found the solution. I simply moved the directory one level up.
Love the GUI, but I run into one problem. Not sure it's GUI-related, because there should be an easy solution.
Running on Windows, I get the error about too many prompts. I tried to change the "sample_path", but it doesn't change anything.
Maybe someone can explain the process in a noob-friendly way; short prompts are very restrictive.
I am trying to run this. It is showing this. How to fix this problem ?
Decrease resolution. You only have 2GB of VRAM.
What is the ideal resolution for this?
Running this locally needs quite a high-end GPU. You can always try some really low resolutions to see if they work, but the results won't be optimal.
I tried 64x64. It still shows the error.
Restart every time after getting that error, and try gradually increasing from 64x64.
The description says 4 GB of VRAM should be enough, but it runs out even on the lowest resolution. Is this normal?
How can I generate more variations from an already generated result? Like Midjourney.
Use the same seed and modify the prompt slightly.
Some other models have the img2img feature. This one doesn't.
Hmmm why is it free?
It's open source, and it uses your own GPU resources.
The people over at Stability AI released the model under an OpenRAIL license. They do charge on their own site for renders, but that's for GPU time.
https://stability.ai/blog/stable-diffusion-public-release
My RTX 3070 with 8 GB of VRAM can only do 512x512 max.
GRisk created a front end for the model and packaged everything up for us.
I also have RTX 3070 and I can do 576x576
Nice. I'll try that.
What has been your best resolution for landscape and portrait?
Also, I know the engine supports more than 500 samples. Where would I change that in the Python files? The file structure for this tool is a massive mess. LOL.
512x704 for portrait
On my 3080 Ti with 12 GB of VRAM I can do up to 704x704, if anyone's wondering.
This would be amazing with an option to run in a loop until stopped manually.
Leave it over night and get hundreds of variations.
I haven't tried it with this tool, but you should be able to do that by increasing your Samples per Prompt to a high number. I do that with a command-line interface on a different branch. As of yet, that function doesn't work with save-to-grid, which is what I'm looking for: wake up to many 12-image grids. I can wake up to 100+ images, though, just by increasing the number of images generated per prompt. Got 101 the other night and then upscaled them 4x in the AM.
That feature doesn't work yet. It is possible, though, if I just copy the same request onto multiple lines.
I'd need to copy-paste a bunch of times if I want hundreds of results, though.
I use an auto clicker to do so. Just run it once, check the time it takes and then put the time on your autoclicker and voila!
Just copy and paste the prompt a few hundred times into the field. Each line of the input field is a new execution.
A frog made out of a strawberry
A frog made out of a strawberry
A frog made out of a strawberry
A frog made out of a strawberry
A frog made out of a strawberry
Would generate 5 results.
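If pasting by hand gets tedious, a one-off script can produce the repeated lines to paste, since each line of the input field is a separate generation:

```python
# Print the prompt N times, one per line, ready to paste into the prompt box.
prompt = "A frog made out of a strawberry"
batch = "\n".join([prompt] * 100)
print(batch)
```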
A WOMAN MADE OUT OF STRAWBERRY
I cannot get Stable Diffusion to use my eGPU; instead it uses the worse dGPU in the laptop. Even when selecting the 2060 manually in Windows settings, the program runs on the 2050, limiting VRAM availability. Anyone know a workaround?
I had a similar issue and didn't find a solution: it was using my faster GPU with less VRAM, where I couldn't achieve 512x512. I installed a different tool and edited a file to point it at my slower GPU with more VRAM, and now I can get 512x704.
Installed this.
https://github.com/lstein/stable-diffusion/tree/78aba5b770d6e85e44c730da9735118d...
Edit the dream.py file and change the last bottom bit of code to point at the GPU you desire.
if __name__ == "__main__":
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"
    main()
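One caveat worth knowing about the trick above: `CUDA_VISIBLE_DEVICES` only takes effect if it is set before any CUDA library initializes, so it belongs at the very top of the entry script. A minimal sketch ("1" assumes the desired GPU enumerates as device 1; check with `nvidia-smi`):

```python
import os

# Hide all GPUs except device 1 from CUDA. Must run before `import torch`
# (or any other CUDA-using import), or it is silently ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```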
I get an error when extracting: "data error: stable diffusion GRisk GUI/torch/lib/torch_cuda_cu.dll", and when I try to run the program it just opens a console window for half a second and closes.
I'm using a 1650 on a laptop, 256 x 256 image produces completely black result. I don't know what to do! Does anyone know a solution or at least what's causing the problem?
Not enough VRAM, maybe; reduce the resolution to see what happens.
I had a VRAM problem at 512x512, but I lowered the resolution to 256x256 and checked the console: no errors. Apparently it's written on the project page that the 16 series has problems with the half-precision option. It's on by default and cannot be unticked. Maybe I can find a workaround to turn it off?
1660 Ti produces black image at 64x64...
Can Successfully run 512x512 with a 2060 SUPER (8GB VRAM)
Yes.
try 704x512 or 512x704
PyTorch is not clearing the VRAM after each run, forcing me to shut down the software to free up GPU memory between uses. A few other posts sound like they may be experiencing something similar.
I see the same RAM usage, but running a second task replaces the used RAM with the new job, so I have no problems reusing it.
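For anyone who can script around the tool, a possible workaround sketch (untested against this GUI): ask PyTorch to release its cached, no-longer-used VRAM between runs. This only returns cached blocks the allocator is holding; it cannot free memory still referenced by live tensors.

```python
import gc

def free_cached_vram():
    """Best-effort release of PyTorch's cached (inactive) GPU memory."""
    gc.collect()  # drop lingering Python-side references first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached blocks back to the driver
    except ImportError:
        pass  # torch not installed; nothing to do
```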